Computer Security Basics, 2nd Edition (2011)

Part I. Security for Today

Chapter 2. Some Security History

Computer security is a hot issue today, but it’s an issue that’s been simmering for many years. The development of government security regulations and standards, research into security mechanisms, and debates over the threats to information and the costs of protecting against these threats—all of these activities are well into their fourth decade. Computer security itself isn’t new. What’s new is security’s broader focus (security means more than just keeping outsiders out) and its wider appeal (security is important to business and folks at home as well as government).

This chapter describes how we got to where we are today. It summarizes key events in the history of computer security, discusses some of the government standards and programs involved with computer security, and introduces the concept of computer databases and the preservation of privacy.

Information and Its Controls

Information security is almost as old as information itself. Whenever people develop new methods of recording, storing, or transmitting information, these innovations are almost inevitably followed by methods of harnessing the new technologies and protecting the information they process. They’re also followed by government investigations and controls. For example:

§ In 1793, the first commercial semaphore system (use of mechanized flags) was established between two locations near Paris. Semaphore signaling came to be used throughout France, Italy, Germany, and Russia. Thousands were employed manning the stations, which operated at a speed of about 15 characters per minute. Code books were used so that whole sentences could be represented by a few characters. Semaphores weren’t very successful in England because of fog and smoke, but in the United States, systems of this kind are the reason so many communities have geographic names such as Signal Hill, Beacon Rock, Signal Butte, and Semaphore Pointe.

§ With Samuel F.B. Morse’s introduction of the telegraph came concerns for protecting the confidentiality of transmitted messages. In 1845, just a year after the invention, a commercial encryption code was developed to keep the transmitted messages secret.

§ Within five years of the telephone’s introduction, a patent application was filed in 1881 for a voice scrambler.

§ In the 1920s, the use of telephone wiretaps by both government and criminal forces resulted in a public outcry, Congressional hearings, and, ultimately, legislation prohibiting most wiretapping.

§ In the 1930s, Title VI of the Communications Act of 1934 prohibited unauthorized interception and publication of communications by wire or radio, while giving the President certain powers to deal with communication matters in the event of war or other national emergency.

§ In the 1940s, concerns about controlling the proliferation of information about atomic energy led to the Atomic Energy Act of 1946. This act created a Restricted Data category of information requiring special protection and penalties for dissemination. Similar controls have been imposed on new advances in other scientific fields.

§ In the 1980s, the Defense Authorization Act specified controls on technical information about emerging military and space technologies.

§ In the 1990s, attention turned to keeping out not just the bad guys, but also bad code. Viruses had been around for some time, but it took dramatic increases in the number of home and business computers to make viruses a plague for everyone.

§ In the 2000s, viruses have diverged into spyware, which detects and reports a user’s Internet activities; adware, which presents the user with targeted advertisements for goods and services; and malware, which is code designed to cause harm or further the illicit ends of the perpetrator.

§ In the months following the terrorist attacks on 9/11, the Homeland Security Act granted broad executive powers related to gathering of information from private communications.

Like any other new technology, computers have raised substantial questions about the degree to which the technology should be controlled—and by whom. Even newer technologies—for example, imaging systems that may impact the integrity of legal and financial documents, including U.S. currency—will no doubt raise the same types of complex issues. Widely available software that allows users to doctor electronic photographs makes it hard to determine what is real and what is designed to entertain or to persuade. Computers and wireless communications also make it easy to eavesdrop.

One ongoing debate in the computer security world is over the government’s restriction of technological information. Government needs to protect certain kinds of information, such as national defense data and the take from intelligence-gathering activities. Particular security technologies—for example, cryptographic products—are very effective at safeguarding such information. Should the government be able to control who can and cannot buy such technologies? Should there be any limits on such sales? For example, should enemy governments be able to buy cryptographic products that may make it more difficult for U.S. intelligence operations to monitor these nations’ communications? What about information concerning the technologies themselves, for example, technical papers about cryptographic algorithms? Should these have to be submitted for government examination and possible censorship? Encryption technologies have been variously classified as munitions or as a normal part of software. Is there a need to stifle the development of products developed privately that may inadvertently mimic (or possibly outperform) existing or proposed government communications technologies? Can technology and the free exchange of intellectual data flourish in an environment that tries to control certain kinds of intellectual exchanges?

A somewhat more alarming trend has been the government role in limiting or suppressing the use of cryptographic techniques between private parties in the United States. One such method, Pretty Good Privacy (PGP), was promulgated at tremendous sacrifice to its developer. At length, PGP prevailed, but some in the cryptographic community are concerned that its commercialization may have encouraged a government-sponsored backdoor that allows easy transmission decoding within the constraints of the legal system. A similar situation has been seen in the telecommunications industry: “law-enforcement ports” have begun to appear in commercial telephone switchgear, creating the possibility of wiretapping without due process.

Another debate concerns the involvement of the government in mandating the protection of nongovernment information. Should the government have any control over the protection of such information? Who gets to decide whether information such as productivity statistics, geological surveys, and health information must be protected from public scrutiny? From whom is it being protected? In 2003, a graduate student compiled a list of all the connections into and out of a major city using publicly available data. Debate ranged from whether the document should be classified to whether the student had gone beyond the scope of a standard research paper and had in fact committed a crime by assembling such a document. Should the government impose the same security standards on systems used to process commercial information as those imposed on systems for government information? The importance of the commercial infrastructure to the economy suggests that the commercial infrastructure deserves attention, similar to a bridge, tunnel, or airport.

As you’d expect, different people have a variety of opinions about these questions. We’ll discuss such questions and representative opinions throughout this book.

Computer Security: Then and Now

In the early days of computing, computer systems were large, rare, and very expensive. Naturally enough, those organizations lucky enough to have a computer tried their best to protect it. Computer security was simply one aspect of general plant security. Computer buildings, floors, and rooms were guarded and alarmed to prevent outsiders from intruding and disrupting computer operations. Security concerns focused on physical break-ins, the theft of computer equipment, and the physical theft or destruction of disk packs, tape reels, punched cards, and other media. (This was in the day when you could destroy a program by grabbing its card deck and scattering it to the wind. Incorrect reassembly of the card deck could cause some memorable computer errors. It was important to choose carefully one’s victim in such a prank, lest they be faster or stronger than they appeared.)

Insiders were also kept at bay. Few people knew how to use computers, and only those who knew the secrets of the machine were privileged to stand in its presence. Most users never saw the computers that crunched their numbers. Batch processing meant that users submitted carefully screened jobs—often through protected slots in the doors of computer rooms—to operators who actually put the machine through its paces.

Times changed. During the late 1960s and 1970s, computer technology was transformed, and with it the ways in which users related to computers and data. Multi-programming, time-sharing, and networking dramatically changed the rules of the game. Users could now interact directly with a computer system via a terminal, giving them more power and flexibility but also opening up new possibilities for abuse. Acoustic couplers—modems with foam pads into which a telephone handset was inserted—allowed connectivity not just in the computer room or building, but from cities far away.

Telecommunications—the ability to access computers from remote locations and to share programs and data—radically changed computer usage. Large businesses began to automate and store online information about their customers, vendors, and commercial transactions. Networks linked minicomputers together and allowed them to communicate with each other and with mainframes containing large online databases. It became much easier to make wholesale changes to data—and much easier for errors to wreak widespread damage. Banking and the transfer of assets became an electronic business.

The increased ease and flexibility of computer access also had a dramatic impact on education. Universities and other schools that could not afford their own computer installations now found it possible to tie into computer networks and centralized computers and databases.

Finally, personal computers put the processing power on the desktop. Expensive mainframe connections were useful, but not always needed. More and more students had an opportunity to experiment with computers, and computers increasingly became a part of the curriculum. The result was a huge increase in the number of people who knew how to use computers. The desire to access shared resources led to interconnection of computers, which led eventually to the Internet. Today, if some resource is not available in the machine on your desk, that machine can connect to another that has the resource.

Inevitably, the increased availability of online systems and information led to abuses. Computer security concerns broadened. Instead of worrying only about intrusions by outsiders into computer facilities and equipment (and an occasional computer operator going berserk), organizations now had to worry about computers that were vulnerable to sneak attacks over telephone lines, and information that could be stolen or changed by intruders who didn’t leave a trace. Incidents of computer crime began to be reported. Individuals and government agencies expressed concerns about the invasion of privacy posed by the availability of individual financial, legal, and medical records on shared online databases. As computer terminals and modems became more affordable, these perils intensified.

The 1980s saw the dawn of a new age of computing. With the introduction of the personal computer, individuals of all ages and occupations became computer users. Computers appeared on desks at home and at the office. Small children learned to use computers before they could read. As the price of systems dropped, and as inexpensive accounting packages became available, more and more small businesses automated their operations. PC technology introduced new risks. Precious and irreplaceable corporate data was now stored on diskettes, which could too easily be lost or stolen.


Some pundits have noted that computers actually have not changed the amount of work done in an office, merely its speed. The same accountant who used to make trial balances near the end of the month can now close the books each evening.

Another artifact of broad computerization is the shifting of work. For instance, authors once typed manuscripts, leaving the entire production process to artists and others who saw the publication through to the presses, once the words were right. Today, word processing programs include formatting commands that place the burden of typography and type selection at the feet of those who create the content.

A final observation is that computers may actually do little to save money. Where once a firm may have employed several accountants, today there may be one or two, but in their place there is now a squad of IT professionals and telecommunications experts needed to keep the systems running.

As PC networks proliferated, so did the use of electronic mail and bulletin boards, dramatically increasing the ability of users to communicate with other users and computers, and vastly raising the security stakes. In the past, even a skilled attacker might have breached security only in a single computer, installation, or local area network; now she had the potential to disrupt nationwide or even worldwide computer operations.

The 1980s also saw systems under attack. Theoretical possibilities came to life as War Games-like break-ins played themselves out on the front pages of local newspapers: the Internet worm, the Friday the 13th virus, the West German hackers of Cuckoo’s Egg fame, and players at espionage. Government, business, and individual users suddenly saw the consequences of ignoring security risks.

The 1990s faced the challenges brought by open systems, distributed computing, and massive interrelationships between network elements. Along with an increasing dependence on networks and the need to share data, applications, and hardware/software resources across vendor boundaries came increasing security risks. In this decade the security business came of age, with more vendors developing trusted systems, bundling security functions, and beginning the development of biometric devices and network security products such as firewalls and intrusion detection systems. Still, security systems lagged behind the technologies they sought to control. Both businesses and individuals lagged still further behind the growing ranks of attackers, many of whom were no longer elite hackers, but an annoying and dangerous breed of vandal known as the script kiddie, short on skill, long on a desire to interrupt computation for the sake of doing so.

In the 2000s, particularly after the attacks of 9/11, security took on a serious tone. Corporations and government alike became more willing to make security an integral part of their products and their jobs. New certifications for IT workers, such as CompTIA’s Security+, the (ISC)² Certified Information Systems Security Professional (CISSP), and the Cisco Certified Security Professional (CCSP), popped up, making security a hot ticket in an otherwise flagging industry.

The challenge of this decade will be to consolidate what we’ve learned—to build computer security into our products and our daily routines, to protect data without unnecessarily impeding our ability to access it, and to make sure that both security products and government and industry standards grow to meet the ever-increasing scope and challenges of technology.

Early Computer Security Efforts

The earliest computer-related security activities began in the 1950s, with the development of the first TEMPEST security standard, the consideration of security issues in some of the earliest computer system designs, and the establishment of the first government security organization, the U.S. Communications Security (COMSEC) Board. The board, which consisted of representatives from many different branches of the government, oversaw the protection of classified information.


The protection of computer data has always traveled hand in hand with the protection of computer signals. Like radio transmitters, the circuits and wires of a computer system or other device with an embedded computer radiate electromagnetic waves that vary in nature according to the information being processed. Because they pose a security threat, there are standards that limit these emanations as well as standards that strive to prevent outright data theft.

Although these events set the scene for later computer security advances, the 1960s marked the true beginning of the age of computer security, with initiatives by the Department of Defense, the National Security Agency, and the National Bureau of Standards (now the National Institute of Standards and Technology or NIST), coupled with the first public awareness of security. The Spring Joint Computer Conference of 1967 is generally recognized as being the locale for the first comprehensive computer security presentation for a technical audience. Willis H. Ware of the RAND Corporation chaired a session that addressed the wide variety of vulnerabilities present in resource-sharing, remote-access computer systems. The session addressed threats ranging from electromagnetic radiation to bugs on communications lines to unauthorized programmer and user access to systems and data.

The Department of Defense, because of its strong interest in protecting military computers and classified information, was an early partisan of computer security efforts. In 1967, DoD began to study the potential threats to DoD computer systems and information. In October of that year, DoD assembled a task force under the auspices of the Defense Science Board within the Advanced Research Projects Agency (ARPA), now known as the Defense Advanced Research Projects Agency, or DARPA. The task force worked for the next two years examining systems and networks, identifying vulnerabilities and threats, and introducing methods of safeguarding and controlling access to defense computers, systems, networks, and information. Published as a classified document in 1970, the task force report, Security Controls for Computer Systems, was a landmark publication in the history of computer security. Its recommendations, and the research that followed its publication, led to a number of programs dedicated to protecting classified information and setting standards for protection.

The Department of Defense took to heart the recommendations of the task force and began to develop regulations for enforcing the security of the computer systems, networks, and classified data used by DoD and its contractors. In 1972, DoD issued a directive[2] and an accompanying manual[3] that established a consistent DoD policy for computer controls and techniques. The directive stated overall policy as follows:

Classified material contained in an ADP system shall be safeguarded by the continuous employment of protective features in the system’s hardware and software design and configuration.

The directive also stipulated that systems specifically protect both the computer equipment and the data that it processes by preventing deliberate and inadvertent access to classified material by unauthorized persons, as well as unauthorized manipulation of the computer and associated equipment.

During the 1970s, under the sponsorship of DoD and industry, a number of major initiatives were undertaken to better understand the system vulnerabilities and threats that early studies had exposed and to begin to develop technical measures for countering these threats. These initiatives fell into three general categories: tiger teams, security research studies, and development of the first secure operating systems.

Tiger Teams

During the 1970s, tiger teams first emerged on the computer scene. Tiger teams were government- and industry-sponsored teams of crackers who attempted to break down the defenses of computer systems in an effort to uncover, and eventually patch, security holes. Most tiger teams were sponsored by DoD, but IBM aroused a great deal of public awareness of computer security by committing to spend $40 million to address computer security issues—and tiger teams were an important part of finding security flaws in the company’s own products.[4]

Tiger teams were an effective way to find and fix security problems, but their efforts were necessarily piecemeal. U.S. Air Force Lieutenant General Lincoln D. Faurer, former Director of the National Security Agency, wrote that the efforts of the tiger teams resulted in two significant conclusions:[5]

Attempts to correct (patch) identified security vulnerabilities were not sufficient to prevent subsequent repenetrations. New tiger teams often found security flaws not found by earlier tiger teams, and one could not rely on the failure of a penetration effort to indicate that there were no exploitable security flaws.

The only apparent means of guaranteeing the protection of system resources would be to design verifiable protection mechanisms into computer systems.

Tiger teams served a useful function by identifying security flaws and demonstrating how easily these flaws could be exploited. In fact, the current model in the hacker community of discovering vulnerabilities through probing, alerting manufacturers, and after a suitable waiting period, publishing a proposed exploit has its roots in tiger team methodology. The commercial practice of penetration testing, or running pen tests, has its roots in tiger teams as well.

By the end of the 1970s, however, it was apparent that a more rigorous method of building, testing, and evaluating computer systems was needed. Although this was achieved in certain defense-related systems, the commercial world in many instances is still waiting.

Research and Modeling

During the 1970s, DoD and other agencies sponsored a number of ground-breaking research projects aimed at identifying security requirements, formulating security policy models, and defining recommended guidelines and controls.

In the research report of the Computer Security Technology Planning Study,[6] James P. Anderson introduced the concept of a reference monitor, an entity that “enforces the authorized access relationships between subjects and objects of a system.” The idea of a reference monitor became very important to the development of standards and technologies for secure systems. The concept of reference monitors and the function of subjects and objects in secure systems are described in Appendix C.
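Anderson’s reference monitor is easiest to picture as a single mediation point that every access request must pass through. The sketch below is purely illustrative; the subjects, objects, and rule table are invented for the example, not drawn from any historical system:

```python
# A minimal sketch of the reference monitor concept: every access a
# subject attempts on an object is mediated by one checking function.
# The rule table and names below are invented for illustration.

ACCESS_RULES = {
    # (subject, object) -> set of permitted access modes
    ("alice", "payroll.dat"): {"read"},
    ("bob", "payroll.dat"): {"read", "write"},
}

def reference_monitor(subject, obj, mode):
    """Grant access only when an explicit rule authorizes it (default deny)."""
    return mode in ACCESS_RULES.get((subject, obj), set())

print(reference_monitor("alice", "payroll.dat", "read"))   # True
print(reference_monitor("alice", "payroll.dat", "write"))  # False
```

The essential property is that the check is total: no access path bypasses the monitor, and any request not explicitly authorized is denied.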

DoD went on to sponsor additional research and development in the 1970s focusing on the development of security policy models. A security policy defines system security by stating the set of laws, rules, and practices that regulate how an organization manages, protects, and distributes sensitive information. The mechanisms necessary to enforce a security policy usually conform to a specific security model. This is also discussed in Appendix C.

A number of additional technical research reports published during the 1970s defined secure systems and security requirements.[7] Also in the 1970s, David Bell and Leonard LaPadula developed the first mathematical model of a multilevel security policy. The Bell and LaPadula model[8] was central to the development of basic computer security standards and laid the groundwork for a number of later security models and their application in government security standards. Although current security literature tends to overlook the mathematical basis of security and encryption in favor of a more practical approach, behind almost any durable, robust security concept is a solid string of mathematics proving it valid.
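The core of the Bell and LaPadula model reduces to two rules: a subject may not read data above its clearance (“no read up,” the simple security property) and may not write data below it (“no write down,” the *-property). A minimal sketch, with an invented clearance ordering:

```python
# A sketch of the two central Bell-LaPadula rules for multilevel
# security. The clearance ordering below is illustrative.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level, object_level):
    # Simple security property: read only at or below your own level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # *-property: write only at or above your own level, so high-level
    # data cannot leak into lower-level objects.
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("secret", "confidential"))   # True: reading down is allowed
print(can_read("confidential", "secret"))   # False: no read up
print(can_write("secret", "confidential"))  # False: no write down
```

Together the two rules guarantee that information can flow only upward in classification, which is exactly the property a multilevel system must enforce.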

Secure Systems Development

A number of government-sponsored projects undertook to develop the first “secure” systems during the 1970s. Most of these efforts were devoted to developing prototypes for security kernels. A security kernel is the part of the operating system that controls access to system resources. The most significant was an Air Force-funded project that led to the development of a security kernel for the Multics (Multiplexed Information and Computing Service) system.[9]

Multics allowed users with different security clearances to simultaneously access information that had been classified at different levels. Because it embodied so many well-designed security features, the Multics system was particularly important to the development of later secure systems. Multics was a large-scale, highly interactive computer system that offered both hardware- and software-enforced security. Specific features of Multics included extensive password and login controls; data security through access control lists (ACLs), an access isolation mechanism (AIM), and a ring mechanism; auditing of all system access operations; decentralized system administration; and architectural features such as paged and segmented virtual memory and stack-controlled process architecture.

Other security kernels under development during the 1970s included Mitre Corporation’s kernel for the Digital Equipment Corporation PDP-11/45[10] and UCLA’s Data Secure Unix for the PDP-11/70.[11]

[2] Security Requirements for Automatic Data Processing (ADP) Systems (DoD 5200.28), 1972, revised 1978.

[3] ADP Security Manual—Techniques & Procedures for Implementing, Deactivating, Testing, and Evaluating Secure Resource-Sharing ADP Systems (DoD 5200.28-M), 1973, revised 1979.

[4] Reports describing the efforts of tiger teams included:

C.R. Attanasio, P.W. Markstein, and R.J. Phillips, “Penetrating an Operating System: A Study of VM/370 Integrity,” IBM Systems Journal, Volume 15, Number 1, 1974.

R. Bisbey, G. Popek, and J. Carlstedt, Protection Errors in Operating Systems, USC Information Sciences Institute, 1978.

P.A. Karger and R.R. Schell, Multics Security Evaluation: Vulnerability Analysis, (ESD-TR-74-193), Electronic Systems Division, U.S. Air Force, Hanscom Air Force Base, Bedford (MA), 1974 (available from NTIS: AD A001120).

[5] Lincoln D. Faurer, “Computer Security Goals of the Department of Defense,” Computer Security Journal, Summer 1984.

[6] J.P. Anderson, Computer Security Technology Planning Study, (ESD-TR-73-51), Electronic Systems Division, U.S. Air Force, Hanscom Air Force Base, Bedford (MA), 1972 (available from NTIS: AD 758206).

[7] These reports included:

Secure Minicomputer Operating System (KSOS): Department of Defense Kernelized Secure Operating System, (WDL-7932), Ford Aerospace and Communications Corporation, 1978.

P.G. Neumann et al., A Provably Secure Operating System: The System, Its Application, and Proofs, Final Report, Project 4332, SRI International, Menlo Park (CA), 1977.

[8] D.E. Bell and L.J. LaPadula, Secure Computer Systems: Mathematical Foundations and Model, (M74-244), Mitre Corporation, Bedford (MA), 1973 (available from NTIS: AD 771543).

[9] For information about Multics, see J.C. Whitmore et al., Design for Multics Security Enhancements, (ESD-TR74-176), Honeywell Information Systems, Cambridge (MA), 1973. (Available from NTIS: AD A030801.) See also Elliot Organick, The Multics System: An Examination of Its Structure, MIT Press, Cambridge (MA), 1975.

[10] W.L. Schiller, The Design and Specification of a Security Kernel for the PDP-11/45, ESD-TR-75-69, Mitre Corporation, Bedford (MA), 1975 (available from NTIS: AD A011712).

[11] Popek et al., “UCLA Secure UNIX,” Proceedings, National Computer Conference, New York (NY), 1979.

Building Toward Standardization

Late in the 1970s, two important government initiatives significantly affected the development of computer security standards and methods. In 1977, the Department of Defense announced the DoD Computer Security Initiative under the auspices of the Under Secretary of Defense for Research and Engineering. The goal was to focus national attention and resources on computer security issues. The initiative was launched in 1978 when DoD called together government and industry participants in a series of seminars. The goal of the seminars was to answer these questions:

§ Are secure computer systems useful and feasible?

§ What mechanisms should be developed to evaluate and approve secure computer systems?

§ How can computer vendors be encouraged to develop secure computer systems?

The second important initiative came from the National Bureau of Standards (NBS), now known as the National Institute of Standards and Technology. NIST has historically been responsible for the development of standards of all kinds. As a consequence of the Brooks Act of 1965 (described in "Computer Security Act" later in this chapter), NIST (as NBS) became the agency responsible for researching and developing standards for federal computer purchase and use, and for assisting other agencies in implementing these standards. The bureau has published many federal standards known as Federal Information Processing Standards publications (FIPS PUBs) in all areas of computer technology, including computer security. Over the course of the next decade or so after the Brooks Act, NBS focused on two distinct security standardization efforts: development of standards for building and evaluating secure computer systems, and development of a national standard for cryptography.

Standards for Secure Systems

NBS’s first charge was to evaluate the federal government’s overall computer security needs and to begin to find ways to meet them. Early efforts, based on NBS’s Brooks Act mandate, included the following:


§ NBS performed an initial study to evaluate the government’s computer security needs.

§ NBS sponsored a conference on computer security in collaboration with the ACM.

§ NBS initiated a program aimed at researching and developing standards for computer security.

§ NBS began a series of Invitational Workshops dedicated to the Audit and Evaluation of Computer Systems. These had far-reaching consequences for the development of standards for secure systems.

At the first Invitational Workshop in 1977, 58 experts in computer technology and security assembled to define problems and develop solutions for building and evaluating secure systems. Invitees represented NBS, the General Accounting Office (GAO), other government agencies, and industry. Their goal? To determine:

What authoritative ways exist, or should exist, to decide whether a particular computer system is “secure enough” for a particular intended environment or operation, and if a given system is not “secure enough,” what measures could or should be taken to make it so.

Workshop participants considered many different aspects of computer security, including accuracy, reliability, timeliness, and confidentiality. The NBS workshops resulted in the publication of several reports.[12] These concluded that achieving security required attention to all three of the following:


§ What security rules should be enforced for sensitive information?

§ What hardware and software mechanisms are needed to enforce the policy?

§ What needs to be done to make a convincing case that the mechanisms do support the policy even when the system is subject to threats?

The NBS report stated:

By any reasonable definition of “secure” no current operating system today can be considered “secure”. . . We hope the reader does not interpret this to mean that highly sensitive information cannot be dealt with securely in a computer, for of course that is done all the time. The point is that the internal control mechanisms of current operating systems have too low integrity for them to . . . effectively isolate a user on the system from data that is at a “higher” security level than he is trusted . . . to deal with.

This conclusion was an important one in terms of the multilevel security concepts discussed in Part II of this book.
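The isolation the report describes was later formalized in multilevel security models such as Bell and LaPadula's. The following toy sketch illustrates the two core rules ("no read up, no write down"); the clearance levels and function names here are our own illustrations, not part of any standard:

```python
# Toy sketch of multilevel security access checks. The levels, subjects,
# and function names are hypothetical illustrations, not a real
# implementation of the Bell-LaPadula model.

LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]

def rank(level):
    return LEVELS.index(level)

def can_read(subject_level, object_level):
    # Simple security property: a subject may read only objects at or
    # below its own clearance ("no read up").
    return rank(subject_level) >= rank(object_level)

def can_write(subject_level, object_level):
    # *-property: a subject may write only at or above its own level,
    # so high data cannot leak downward ("no write down").
    return rank(subject_level) <= rank(object_level)

print(can_read("Secret", "Confidential"))   # True: reading down is allowed
print(can_read("Confidential", "Secret"))   # False: no read up
print(can_write("Secret", "Confidential"))  # False: no write down
```

A system whose internal controls cannot enforce checks like these is exactly what the NBS report meant by "too low integrity" to isolate users from higher-level data.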

The NBS workshops recommended that a number of actions be taken. One action was to formulate a detailed computer security policy for sensitive information not covered by national security policies and guidelines. Another was to establish a formal security evaluation and accreditation process, including the publication of a list of approved products to guide specification and procurement of systems intended to handle sensitive information. A third was to establish a standard, formalized, institutionalized technical means of measuring or evaluating the overall security of a system.

As an outgrowth of the NBS workshops, the Mitre Corporation was assigned the task of developing an initial set of computer security evaluation criteria that could be used to assess the degree of trust that could be placed in a computer system that protected classified data. Beginning in 1979, in response to the NBS workshop and report on the standardization of computer security requirements, the Office of the Secretary of Defense conducted a series of public seminars on the DoD Computer Security Initiative. One result of these seminars was that the Deputy Secretary of Defense assigned to the Director of the National Security Agency (NSA) responsibility for increasing the use of trusted information security products within the Department of Defense.

National Computer Security Center

As a result of NSA’s new responsibility for information security, on January 2, 1981, the DoD Computer Security Center (CSC) was established within NSA to expand upon the work begun by the DoD Computer Security Initiative. The official charter of the CSC is contained in the DoD Directive entitled “Computer Security Evaluation Center” (5215.1).

Several years later, the computer security responsibilities held by CSC were expanded to include all federal agencies and the Center became known as the National Computer Security Center (NCSC). The Center was founded with the following goals:

§ Encourage the widespread availability of trusted computer systems

§ Evaluate the technical protection capabilities of industry- and government-developed systems

§ Provide technical support of government and industry groups engaged in computer security research and development

§ Develop technical criteria for the evaluation of computer systems

§ Evaluate commercial systems

§ Conduct and sponsor research in computer and network security technology

§ Develop and provide access to verification and analysis tools used to develop and test secure computer systems

§ Conduct training in areas of computer security

§ Disseminate computer security information to other branches of the federal government and to industry

In 1985, NSA also merged its communications and computer security responsibilities under the Deputy Directorate for Information Security Systems (INFOSEC).

Birth of the Orange Book

The Center met an important goal by publishing the Department of Defense Trusted Computer System Evaluation Criteria (TCSEC), commonly known as the Orange Book because of the color of its cover. Based on the computer security evaluation criteria developed by Mitre,[13] and on such developments as the security model developed by Bell and LaPadula, this publication was distributed to government and industry experts, revised, and finally released in August 1983.

The Orange Book is the bible of secure system development. It describes the evaluation criteria used to assess the level of trust that can be placed in a particular computer system. It effectively makes security a measurable commodity so a buyer can identify the exact level of security required for a particular system, application, or environment. The Orange Book presents a graded classification of secure systems. It defines four broad hierarchical divisions, or levels, of protection—D, C, B, and A, in order of increasing security. Within each division, the Orange Book defines one or more classes, each defined by a specific set of criteria that a system must meet to achieve a rating in that class. Some divisions have only a single class; others have two or three. The original Orange Book was revised slightly and reissued in December 1985.
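Because the divisions and classes form a strict hierarchy, two ratings can be compared by position on a single ordered scale. A minimal sketch (the class ordering is the Orange Book's; the helper name is ours):

```python
# Orange Book (TCSEC) divisions and classes, lowest to highest assurance.
# D: minimal protection; C: discretionary protection; B: mandatory
# protection; A: verified design.
ORANGE_BOOK_CLASSES = ["D", "C1", "C2", "B1", "B2", "B3", "A1"]

def at_least(rating, required):
    """True if `rating` meets or exceeds `required` on the TCSEC scale."""
    return ORANGE_BOOK_CLASSES.index(rating) >= ORANGE_BOOK_CLASSES.index(required)

print(at_least("B2", "C2"))  # True: B2 exceeds C2
print(at_least("C1", "B1"))  # False: C1 falls short of B1
```

This single ordered scale is what let a procurement officer specify, say, "C2 or better" and compare candidate systems directly.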

Using the Orange Book criteria, NCSC performed evaluations of products submitted by vendors for certification at a particular level of trust. Products that are successfully evaluated through the NCSC Trusted Products Evaluation Program (TPEP) are placed on the Evaluated Products List (EPL). Appendix C describes the Orange Book evaluation criteria (and also mentions some of the complaints about these criteria). The Orange Book is so pervasive that although the standards have transferred to its successor, the Common Criteria, Orange Book designations are often used synonymously with Common Criteria equivalents, and students research one by studying the other.

In the days since the Orange Book, the focus on common security has shifted to the Common Criteria. This set of guidelines describes parameters for secure computing and has a scale to rate the performance of an examined system against those parameters. Based on the European White Book, Common Criteria, in conjunction with numerous FIPS, is the basis of computer security in the United States today. Orange Book culture is so enduring, however, that you can barely speak of one without invoking the other. An overview of the interrelationship of these standards is contained in Appendix C.

Standards for Cryptography

During the 1970s, interest in a national cryptographic standard began to build within the government. The idea was to find an algorithm that could be used to protect sensitive unclassified government information (classified algorithms were already being used to protect classified information) and sensitive commercial data such as banking electronic funds transfers. In 1973, the National Bureau of Standards, part of the Department of Commerce, invited vendors to submit data encryption techniques that might be used as the basis of an encryption algorithm.

Under the auspices of the Institute of Computer Science and Technology (ICST), later known as the National Computer Systems Laboratory, NBS organized a series of workshops for government and industry representatives to select a national encryption algorithm. The method eventually selected by NBS became known as the Data Encryption Standard (DES).

The DES was adopted as a Federal Information Processing Standard (FIPS PUB 46) in 1977 as the official method of protecting unclassified data in the computers of U.S. government agencies, and was subsequently adopted as an American National Standards Institute (ANSI) standard.

The DES consists of two components: an algorithm and a key. The DES algorithm is a complex, iterative process that is public information. This algorithm uses a secret value—the key—to encode and decode messages.
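DES is built as a 16-round Feistel cipher, and the Feistel structure is what lets one public algorithm both encode and decode while all secrecy resides in the key. The following toy sketch illustrates that structure only; the round function, block size, and subkeys here are stand-ins, not the real DES internals (no S-boxes, permutations, or key schedule):

```python
# Toy illustration of the Feistel structure DES uses: the algorithm is
# public, decryption is encryption with the subkeys reversed, and the
# secret key alone protects the data. The round function below is a
# stand-in, NOT the real DES round function.

def round_fn(half, subkey):
    # Stand-in mixing step; real DES uses expansion, S-boxes, and
    # permutations here.
    return (half * 31 + subkey) & 0xFFFF

def feistel_encrypt(block, subkeys):
    left, right = block >> 16, block & 0xFFFF
    for k in subkeys:
        # Classic Feistel round: swap halves, XOR in the mixed half.
        left, right = right, left ^ round_fn(right, k)
    return (left << 16) | right

def feistel_decrypt(block, subkeys):
    left, right = block >> 16, block & 0xFFFF
    for k in reversed(subkeys):
        # The inverse round undoes the XOR using the same round function.
        right, left = left, right ^ round_fn(left, k)
    return (left << 16) | right

subkeys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]  # hypothetical per-round subkeys
ciphertext = feistel_encrypt(0xDEADBEEF, subkeys)
print(hex(feistel_decrypt(ciphertext, subkeys)))  # 0xdeadbeef
```

Note that decryption never needs an inverse of the round function; reversing the subkey order is enough, which is one reason Feistel designs like DES were attractive for hardware.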

DES technology has been embedded in many commercial products. Until 1986, the National Security Agency endorsed products containing DES-based algorithms. In 1986, NSA announced that it would no longer endorse such products. There was a substantial reaction to this decision by vendors, users, and other government agencies. Chapter 7 describes DES in greater detail and outlines some of the issues surrounding the use of the algorithm.

DES has now been cracked, both by special-purpose devices (not necessarily computers) constructed of microchips and by clusters of computers operating in tandem over the Internet. While the DES algorithm is likely to remain in use for some time (it’s still efficient in certain two-way voice encryption systems), cryptographic researchers have continued to work on the development of more advanced algorithms. A competition was held in the late 1990s to determine which encryption standard would become the Advanced Encryption Standard (AES). The winner of the competition was an algorithm called Rijndael. The “losers,” many of which were powerful encryption tools, are also enjoying success in the world today. Most are available as open source programs.
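The arithmetic behind cracking DES is easy to sketch. Assuming a hypothetical search rate in the general neighborhood of the special-purpose hardware of the late 1990s (the rate below is an illustration, not a measured figure):

```python
# Back-of-envelope brute-force arithmetic: a 56-bit DES key versus a
# 128-bit AES key. The keys-per-second rate is an assumed figure for
# illustration only.
rate = 90_000_000_000  # keys tried per second (hypothetical)

des_keys = 2 ** 56
aes_keys = 2 ** 128

des_days = des_keys / rate / 86_400          # seconds per day
aes_years = aes_keys / rate / 86_400 / 365

print(f"DES keyspace exhausted in about {des_days:.1f} days")
print(f"AES-128 keyspace would take about {aes_years:.1e} years")
```

The point of the exercise: DES's 56-bit keyspace fell within reach of determined attackers, while each added key bit doubles the work, putting a 128-bit keyspace beyond any brute-force search.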

Through the Commercial Communications Security Endorsement Program (CCEP), government and industry representatives develop, test, and endorse new cryptographic products.

Standards for Emanations

As early as the 1950s, concerns began to develop about the possibility that the electrical and electromagnetic radiation that emanates from computer equipment (as it does from all electronic equipment) could be intercepted and deciphered.

It works like this: any time a current flows, magnetic fields form around it. Conversely, when magnetic fields change size or shape, they induce currents in nearby conductors. Finally, any voltage on one side of an insulator can cause changes to the voltage on the other side through the coupling of charges. Put together, these effects mean that operating any device that uses electricity can create signals detectable elsewhere. Often you see this as a disturbance of some kind, such as the interference on a televised football game caused by someone operating a vacuum cleaner or blender nearby.

In an effort to counter this threat, the U.S. government established, in the late 1950s, the first standard for the level of emanations acceptable for equipment used to process classified information. During the 1960s and 1970s, as standardization efforts proceeded in the areas of secure systems and cryptography, they also resulted in the refinement of the initial TEMPEST standard and the establishment of a program to endorse products that met its requirements.

The Industrial TEMPEST Program was established in 1974 with three main goals:

§ Specify a TEMPEST standard that sets allowable limits on the levels of emission from electronic equipment.

§ Outline criteria for testing equipment that, according to its vendors, meets the TEMPEST standard.

§ Certify vendor equipment that successfully meets the TEMPEST standard.

The National TEMPEST Standard, known as NACSEM 5100 (National Communications Security Emanations Memorandum 5100), was published in 1970. Much of the document was classified, and the standard has been revised several times. In the current standards family, NSTISSAM/1-91 (Compromising Emanations Laboratory Test Requirements, Electromagnetic) was published in 1991 and later superseded by NSTISSAM/1-92; NSTISSAM/2-91 (Compromising Emanations Analysis Handbook) was published in 1991; and NSTISSAM/3-91 (Maintenance and Disposition of TEMPEST Equipment) was published in 1991, with certain augmentations published in 1995.

Government and industry representatives have worked together to set standards and to develop, test, and certify TEMPEST equipment. The U.S. government approves laboratories that evaluate TEMPEST products.

[12] Z. Ruthberg and R. McKenzie, ed., Audit and Evaluation of Computer Security, Special Publication 500-19, National Bureau of Standards, Gaithersburg (MD), 1980 (SN 003-003-01848-1).

Z. Ruthberg, ed., Audit and Evaluation of Computer Security II: System Vulnerabilities and Controls, Special Publication 500-57, (MD78733), National Bureau of Standards, Gaithersburg (MD), 1980 (SN 003-003-02178-4).

[13] G.H. Nibaldi, Proposed Technical Evaluation Criteria for Trusted Computer Systems, (M79-225), Mitre Corporation, Bedford (MA), 1979 (available from NTIS: AD A108832).

G.H. Nibaldi, Specification of a Trusted Computing Base, (M79-228), Mitre Corporation, Bedford (MA), 1979 (NTIS publication: AD 108831).

Computer Security Mandates and Legislation

Throughout history, new advances in the availability, processing, and transmission of information have inevitably been followed by new security methods, federal laws, and procedural controls. These are typically aimed at protecting information that’s considered to be essential to national security or other national interests.

In the 1970s and into the 1980s, national concerns about the Soviet interception of domestic communications intensified. Since the 1990s, that threat has diminished, but other nations, primarily those in the Middle East, but certainly Communist China and North Korea as well, have become threats. All this has led to a large number of security-related pieces of legislation, Presidential directives, and national policy statements. These fall into several categories:

Protection of classified or sensitive information

Legislation mandating computer security practices by federal agencies and contractors. The idea of this legislation is that organizations that process classified or sensitive unclassified government information must be careful to protect that information from unauthorized access.

Computer crime

Legislation defining computer crime as an offense and extending other regulations to cover thefts and other abuses carried out by computers and other new techniques. In addition to federal policies, virtually all U.S. states have enacted their own legislation prohibiting computer crime and abuse.


Privacy

Legislation protecting the privacy of information maintained about individuals (e.g., health and financial records). Another consideration for computer privacy is the practice of merging records from multiple, seemingly benign databases into profiles that may reveal devastating amounts of information about an individual.

The Balancing Act

Although it may be a mischaracterization, much of the concern about computer security centers on the government. Data needs to be classified to avoid exposing sensitive information and the means used to collect it, or protected to avoid allowing unfriendly investigators to compile data and expose national weaknesses. In addition, the communications of terrorists and criminals can be a rich source of information to prevent or solve crime, but access to that information by law enforcement or military agencies can constitute an infringement of personal liberty (which is one of the principles upon which this country was founded). Thus government is both the protector and the cause of concern, and untying the hands of enforcement agencies that protect us while keeping their agents from scrounging secrets out of our trash is one of the enduring concerns of national computer policy.

National Security Decision Directive 145 (NSDD 145), entitled the National Policy on Telecommunications and Automated Information Systems Security, signed by President Reagan on September 17, 1984, had far-reaching significance in the world of computer security. NSDD 145 mandated the protection of both classified and unclassified sensitive information. It also gave NSA the obligation to “encourage, advise, and if appropriate, assist” the private sector. Because it gave NSA jurisdiction in the private sector, NSDD 145 was a controversial directive.

NSDD 145 required systems that handle classified information to be secured as necessary to prevent access by unauthorized individuals. It also required that systems protect sensitive information, whether it originated in the government or outside it, in proportion to the potential damage that disclosure, alteration, or loss posed to national security. Examples of sensitive information include productivity statistics; information that might relate to the disruption of public services (e.g., air traffic control information); and virtually all information collected by such organizations as the Social Security Administration, the Federal Bureau of Investigation, the Internal Revenue Service, and the Census Bureau (e.g., individual health and financial records). These provisions rankled private enterprise, which used such data for planning purposes. Post 9/11 concerns have caused the pendulum to swing in favor of guarding such information.

In an effort to clarify the meaning of sensitive information and better interpret the requirements for its protection, the National Telecommunications and Information Systems Security Publication 2 (NTISSP 2), “National Policy on Protection of Sensitive but Unclassified Information in Federal Government Telecommunications and Automated Systems,” was published on October 29, 1986.

NTISSP 2 defined sensitive information as follows:

Sensitive, but unclassified information is information the disclosure, loss, misuse, alteration, or destruction of which could adversely affect national security or other federal government interests. National security interests are those unclassified matters that relate to the national defense or the foreign relations of the U.S. government. Other government interests are those related, but not limited to the wide range of government or government-derived economic, human, financial, industrial, agricultural, technology, and law enforcement information, as well as the privacy or confidentiality of personal or commercial proprietary information provided to the U.S. government by its citizens.

NTISSP 2 applied to all government agencies and contractors. It described the general categories of information that might relate to national security, foreign relations, or other government interests. It instructed the heads of departments and agencies to determine what information is sensitive but unclassified and to provide system protection for that information when it is electronically communicated, transferred, processed, or stored.

Computer Fraud and Abuse Act

Issued in 1986, the Computer Fraud and Abuse Act (Public Law 99-474, codified at 18 U.S. Code 1030) prohibits unauthorized or fraudulent access to government computers and establishes penalties for such access. Anyone convicted under this act faces a fine of $5,000 or twice the value of anything obtained via unauthorized access, plus up to five years in jail (one year for first offenders). The act prohibits access with the intent to defraud, as well as intentional trespassing. For example, posting passwords to federal computers on pirate electronic bulletin boards is a misdemeanor under this act. Robert T. Morris, author of the infamous Internet worm, was the first person convicted under the Computer Fraud and Abuse Act (Section 1030 (a)(5)), setting a precedent for future cases.

There are complaints about the wording of the Computer Fraud and Abuse Act on both sides of the issue.

The frustration of the Justice Department in prosecuting espionage cases in which classified information has been obtained by computer has led the Department to try to change the wording of the Computer Fraud and Abuse Act. The current law says it’s a felony for anyone knowingly to gain unauthorized access to a computer and obtain classified information “with the intent or reason to believe that such information so obtained is to be used to the injury of the United States or to the advantage of any foreign nation.” The Justice Department wants to drop that clause. The revised law would simply require proof that the intruder obtained certain information, not that the information was delivered or transmitted to anyone else.

Amendments to the Computer Fraud and Abuse Act have been proposed in Congress to expand the act’s current government and banking focus to any systems used in interstate commerce or communications. The amendments would also change the orientation of the act from simple unauthorized access to the use of a computer system in performing other crimes.

On the other side of the issue, there have been complaints that the language of the Computer Fraud and Abuse Act is too general and can apply to anyone who writes or teaches about computer security. There have also been suggestions that the act should explicitly treat different types of offenses in different ways. At present, there is no clear distinction between people who use computers for hacking, for computer crime, or for terrorism.

As yet, no changes have been made to the language of the Computer Fraud and Abuse Act.

Computer Security Act

An important outgrowth of NSDD 145 was the development of the Computer Security Act of 1987 (H.R. 145), which later became Public Law 100-235. This act has expanded the definition of computer security protection and has increased awareness of computer security as an issue. It’s the closest thing the United States has to a federal data protection policy. The Computer Security Act, which went into effect in September 1988, requires every U.S. government computer system that processes sensitive information to have a customized computer security plan for the system’s management and use. In this, it mirrors the Common Criteria used in Europe. It also requires that all U.S. government employees, contractors, and others who directly affect federal programs undergo periodic training in computer security. All users of systems containing sensitive data must also receive computer security training corresponding to the sensitivity of the data to which they have access.

The Computer Security Act further defines sensitive information as information whose “loss, misuse, unauthorized access to, or modification of could adversely affect the national interest, or the conduct of federal programs, or the privacy to which individuals are entitled under . . . the Privacy Act” (described in the section "Privacy Considerations" later in this chapter).

This act gave NIST new responsibility for federal computer security management. It assigned to the ICST within NIST responsibility for assessing the vulnerability of federal computer systems, for developing standards, and for providing technical assistance, as well as for developing guidelines for the training of federal personnel in computer security-related areas. NSA was assigned a role as advisor to NIST regarding technical safeguards.

The Computer Security Act does not affect the protection of classified information. It also allows requirements to be waived if they disrupt or slow down the implementation of what’s considered to be an important federal agency mission.

Searching for a Balance

Following the adoption of the Computer Security Act, concerns were voiced by Congress, industry, professional groups, and the general public about the potential for abuse. These concerns focused on NSA’s role in the private sector—in particular, relating to the control of unclassified information. Banks and other data-intensive industries feared that the act would impose disruptive restrictions on their operations. Civil libertarians were concerned about infringements on personal privacy. These concerns culminated in a series of Congressional hearings during 1987. During these hearings, NTISSP 2 was rescinded (March of 1987), and a review of NSDD 145 was ordered. Debate continues about the appropriate role of government in protecting and mandating the protection of information (see the sidebar "Hackers’ Rights“).


Hackers’ Rights

Computer hackers have come under increasing scrutiny, to the point that it may be technically illegal to perform many legitimate research activities. Legislation did not leave too many holes for small independent researchers, and large firms and universities do not want to risk a technical violation of the law. As a result, current activity by hackers takes place in an underground movement.

This is actually a return to conditions that existed about a decade ago. In the wake of early 1990s law enforcement sweeps such as Operation Sun Devil, which seized 42 computers and 23,000 floppy disks in 14 cities, shut down several bulletin board systems, and made four arrests in the spring of 1990 (some resulting from publication of information about the BellSouth 911 emergency telephone service and other “hacker tutorials”), a group was formed to provide legal aid to those it said had been victimized.

Mitch Kapor (of Lotus and ON Technology fame) and John Barlow (author and Grateful Dead songwriter) banded together, in conjunction with some well-known New York and Boston law firms, to form a defense team known as the Electronic Frontier Foundation, which seeks to protect the civil liberties of legitimate computer users. It was the EFF that successfully cracked the DES encryption algorithm using a specialized hardware device even as government witnesses were testifying in committees and before Congress that DES would take centuries, even millennia, to crack, even with a fast computer. Obviously, a great debt is owed these pioneers. Without them, trust in DES would have been misplaced, and private information compromised.

Members of the team charged that in trying to protect the government, industry, and individuals from cracker attacks, government agents violated constitutional rights. They also warned that the intimidation of today could thwart the technology of tomorrow. Senator Patrick Leahy, Democrat from Vermont, insisted that lawbreakers should be punished, but commented, “We cannot unduly inhibit the inquisitive 13-year-old who, if left to experiment today, may tomorrow develop telecommunications or computer technology to lead the United States into the 21st century. He represents our future and our best hope to remain a technologically competitive nation.”

His plea went unheard. Harsh prison sentences seem to rule now, as public anger has started to rise against spam and other invasions of privacy. In one recent case, an attacker sent spam to hundreds of thousands of people using the return addresses of certain reporters at some Philadelphia newspapers. The attacker wanted to rant about how his favorite sports team was getting a raw deal. Unfortunately, when thousands of users who received the email logged on to protest, and when the out-of-date addresses on the attacker’s list started bouncing back as returned to sender, the newspaper workers whose email addresses had been spoofed started getting deluged with mail, none of which was rightfully theirs. The attacker was eventually caught and sentenced to numerous consecutive terms, 471 years in total. Manslaughter can carry a sentence as light as 5 to 10 years. Obviously the sentencing structure is currently out of whack.

In 1990, NSDD 145 was revised and reissued as National Security Directive 42 (NSD 42). The new directive narrowed the original scope of NSDD 145 to primarily defense-related information.

Recent Government Security Initiatives

NSA, NIST, and DoD have all played an important part in developing computer security standards and in carrying out security programs. The exact balance of responsibility has not always been clear, and the boundaries continue to shift. Typically, NSA has had responsibility for the protection of classified military and intelligence information via computer security techniques, while NIST has been responsible for developing standards and for developing computer security training programs. At times, both NSA and NIST have claimed responsibility for safeguarding unclassified, sensitive information, though most recently this type of information has been in NIST’s bailiwick. Although standards for the evaluation of secure systems came about under NIST’s auspices, actual evaluations are performed by the National Computer Security Center, a part of NSA. Different pieces of legislation seem to shift the balance one way or the other.

As a consequence of NSDD 145, the balance of responsibility seemed to shift to NSA (only to shift back again to NIST with the Computer Security Act and the revision of NSDD 145). NIST’s Computer Security Program now encompasses a wide range of security activities, including developing and publishing computer security standards (in conjunction with organizations such as ANSI, ISO, and IEEE, described later in this chapter); conducting research in areas of security testing and solutions; and providing computer security training and support to other government agencies.

NSA and NIST have signed a memorandum of understanding about how they will cooperate on issues affecting the protection of sensitive unclassified information, mainly focusing on the implementation of the Computer Security Act. Their agreement is still subject to interpretation, and the balance continues to be a fragile one.

Modern Standards for Computer Security

In late 1990, NSA and NIST announced they were embarking on a joint venture that might result in new computer security criteria for computer procurements by federal agencies. The joint venture was also expected to take into consideration the European Community’s Information Technology Security Evaluation Criteria (ITSEC), requirements that are under consideration as an international security standard. The result of this collaboration is known as the Common Criteria, and it is the foundation of computer protection documents today.

Under funding by DARPA, the National Research Council published a report, entitled Computers at Risk, that expressed concern about the state of computer security in the United States, and made recommendations about the need for a more coordinated security structure. The committee’s report also suggested the publication of a comprehensive set of what are known as Generally Accepted System Security Principles (GASSP), which would clearly define necessary security features and requirements.

GASSP and GAISP Overview

Creation of the GASSP began in response to Recommendation #1 of the Computers at Risk report. Originally carried forward by the International Information Security Foundation (IISF), the GASSP has drawn from an array of existing guidelines, such as those created by the Organization for Economic Cooperation and Development (OECD) and the United Kingdom Department of Trade and Industry. As a global initiative, it has gained participation and support from respected groups such as the International Information Systems Security Certification Consortium (ISC)2, the International Organization for Standardization (ISO), the Institute of Internal Auditors (IIA), and the international Common Criteria effort.

The Information Systems Security Association (ISSA) decided, with the concurring support of the IISF, to take on the leadership needed to finalize and promote this important body of work as the Generally Accepted Information Security Principles (GAISP).

In the wake of the attacks of 9/11, the security industry has seen a dramatic increase in awareness and participation from the U.S. federal government, with new initiatives such as the Department of Homeland Security and the Partnership for Critical Infrastructure Security. Security professionals have an opportunity to work with these efforts and establish self-regulations rather than permit government-mandated security policies and regulations to fill the perceived void.

The GAISP is a comprehensive guidance hierarchy that provides a globally consistent, practical framework for information security. The final body of the GAISP will provide three levels of guiding principles to address security professionals at all levels of technical and managerial responsibility:

Pervasive Principles

Targeting governance and executive-level management, the Pervasive Principles outline high-level guidance to help organizations solidify an effective information security strategy.

Broad Functional Principles

Broad Functional Principles are the building blocks of the Pervasive Principles and more precisely define recommended tactics from a management perspective.

Detailed Principles

Written for information security professionals, the Detailed Principles provide specific, comprehensive guidance for consideration in day-to-day information risk management activity.

Privacy Considerations

The ability to collect and manage information doesn’t necessarily confer the right to save, analyze, and publicize that information, but several recent cases suggest that this is occurring. In one high-profile case, an airline was asked to turn over the records of millions of passengers who had purchased tickets on its flights. Apparently the purpose was to combine customers’ flight plans with other commercially available data, such as reports from credit bureaus, and determine which fliers might fit the profile of a terrorist. Such profiling is feasible only if you can rapidly combine information from several different databases, and that, in the view of many, represents a massive invasion of privacy.

The concern that the compilation of more benign databases could provide potentially harmful information began at about the time commercial use of computers first accelerated. During the 1960s, the increasing availability of large-scale computers made possible for the first time the development and use of centralized, computerized databases. In 1965, recommendations were made for the establishment of a National Data Bank to serve as a central repository of all personal information gathered by federal agencies about U.S. citizens. This proposal awakened concerns about the computer’s potential for invading individual privacy. Extended and heated testimony was heard before the U.S. House of Representatives Subcommittee on the Computer and Invasion of Privacy. There was considerable national discussion about the potential for abuse of centralized databases and about the need for legal action to protect society against such abuses.

Since the dawn of the computer age, there has been tension between, on the one hand, the technologies that enable huge amounts of information to be stored and accessed with accuracy and efficiency, and, on the other hand, the right to personal privacy. In the airline case, the defense agency that requested the data even used a private company to merge the lists and perform the analysis. This circumvented laws prohibiting the government from forming such databases; private industry, even as a contractor to government, faces fewer such restrictions. This issue is explored forcefully in Database Nation, by Simson Garfinkel (O’Reilly).


Computer security, originally envisioned to protect the denizens and equipment contained in the “glass houses” of computer centers, has grown into a network of laws, standards, and best practices designed to protect the computer and its network from internal and external attack, as well as from the misuse of its contents by owners and protectors alike.