Chapter 2. What Are You Trying to Protect?

“You better check yourself before you wreck yourself.”

Ice Cube

Only when you know, and can describe, exactly what you are trying to protect can you develop an effective playbook and incident response program. You must have a solid understanding of what needs protecting. Starting with tools and technology is truly putting the cart before the horse. Remember that as defenders, we do not have the luxury of defining the attacks used against us. We can only decide what we believe is most important to protect and react when it is threatened. The attackers have their own ideas as to what’s valuable, but it’s up to us to determine where they are most likely to strike, and what’s at stake if we lose.

When we originally developed our playbook, some of our earliest requirements demanded that it enable us to:

§ Detect malware-infected machines

§ Detect advanced and sophisticated attacks

§ Detect suspicious network activity

§ Detect anomalous authentication attempts

§ Detect unauthorized changes and services

§ Describe and understand inbound and outbound traffic

§ Provide custom views into critical environments

It’s impossible to determine your risk (and subsequently how to manage it) if you are not aware of what you have and what you have to lose. An unknown system, with no log information and not even a reasonable way to trace activity back to the host, presents a significant risk to the organization. Imagine a datacenter filled with a mishmash of servers and services, some potentially orphaned by erstwhile sysadmins and projects. If they are unknown to the IT and security teams, they are the perfect jumping-off point for attackers, whether there is valuable data on these systems or not. What better way to access internal secrets than through an unknown box on the same network as the target? If the security teams ever do catch on to the scent, how will they find out who owns it, and most importantly, will they be able to shut it down without adversely affecting some business process?

Whether it be crown jewels, government mandate, or simply basic situational awareness, the point here is that a failure to understand your own assets and risks is a recipe for a security disaster. When someone compromises a host and you can neither respond properly nor explain why the host’s information and ownership were unknown, you will be faced with hard questions to answer.

This chapter will cover what you can and should protect and what you’re obligated to protect. We’ll broadly discuss the basic risk management essentials as they apply to security monitoring, and provide solid examples to help you get started in developing your plan.

The Four Core Questions

When we set out to modernize our incident management process, we took a high-level view of what was and was not working in our operations. Rather than diving right in to solve the fun technical problems, we instead went back to the basics to ensure we had properly defined the problems we were trying to solve, and that we had answers for the most basic requirements of our charter to protect the security of our company. We distilled these problems and goals into the following four questions:

§ What are we trying to protect?

§ What are the threats?

§ How do we detect them?

§ How do we respond?

Answers to these four questions provide the core foundation to security monitoring and incident response. Ask yourself these questions now, and repeat them often. Throughout this book, we will help you answer these questions, but you must at least start with an understanding of what it is you are protecting. If you don’t know what to protect, you will most certainly suffer the consequences of successful attacks. While you can prevent some attacks some of the time, you can’t prevent all attacks all of the time. Understanding that there will always be a place for incident prevention, while also recognizing that not every threat can be blocked, ensures a pragmatic approach to detection and response.

There Used to Be a Doorway Here

The larger and more complex an organization’s network, the more overhead required to inventory, assess, and attribute assets on that network. Take the example of a company (most companies) that has only a general idea of what systems, applications, and networks it owns, much less how they interoperate. In some cases, outsourced hosting or application providers may be overlooked or unknown if procured outside of approved protocol or policy, creating an even larger attack surface. A small startup grows larger and larger, eventually reaching the point of acquiring additional companies. Throughout the growth periods, there’s little time or tolerance to properly document the network changes and new systems and services, as leadership believes it impedes progress and doesn’t contribute to the overall bottom line. So although the company and its profits grow, the network becomes so complex that understanding who owns which hosts and where they are located becomes an insurmountable task, invariably leading to problems in the future.

Why does this even matter? If the company is doing well, does it really make a difference if you don’t know where all the servers are? The fact remains that there is no perfect security. There are a finite number of resources available to protect your organization and its assets, and often it’s a battle to get what you need to simply cover the basics.

No one can protect everything and everyone all the time. It is called incident response after all. Certainly there are ways to prevent attacks, but there will never be a perfect defense for everything. We believe it’s all about finding a balance and prioritizing what’s most critical first with the resources available, and then adding additional layers as your monitoring program matures. Beyond that, it’s important not only to understand what’s most important to your organization, but also how to access it and who owns it. Some organizations are keen on delivering attribution details on successful attackers. However, it’s also important to attribute the victims of attacks as well for proper response and remediation. We’re fond of saying that detection without subsequent attribution is worthless. That is, if you find compromised systems and data but no owners, mitigating the threat quickly will be challenging, if not impossible.

InformationWeek.com once published an article entitled “Server 54, Where Are You?”, which reported that a University of North Carolina server had been “lost.” Not only lost, but also literally concealed by drywall after a remodeling project. According to the article, “IT workers tracked it down by meticulously following cable until they literally ran into a wall.” Similarly, the University of California, San Diego found a long-abandoned server in the physics department hidden above drop-down false-ceiling tiles. There are likely countless true tales of misplaced, unknown, rogue, or otherwise ambiguous servers and systems, not to mention pluggable computers designed for pentesting hiding in plain sight. Although these comical errors are easy to chalk up to poorly managed, under-resourced universities, these types of issues happen everywhere. Many organizations have experienced the more mundane issues of misattributed servers, abandoned IP addresses, unused segments, and geographical or physical layer challenges. How does this kind of thing happen, and what would happen if the lost hosts had been compromised? A hidden server hosting malware, a DoS attack, or worse would be serious trouble for an incident response team, particularly if it was impossible, or nearly impossible, to trace. There are logical solutions to cutting off attacks (e.g., ACLs, null IP routes, or MAC address isolation), but at some point the physical box needs to be located and remediated to get things under control again.

In short, whether it’s mom-and-pop small, a nascent startup, or enterprise size, every network has a history and an expansion story. If you plan to succeed, you need a fundamental lay of the land for any network you’re protecting.

Host Attribution

So how do you determine where a host is located? Without some type of host or network management system, it’s very difficult to track every useful attribute about your hosts and networks. Knowing what metadata to collect, how to keep it updated, and how to perform checks against it makes running any large network easier, with the added benefit of quick error or outage detection. The first step is to look for clues.

The more clues, or context, you have to help identify an asset on your network and understand its purpose, the better your chances of successful identification. “Tribal knowledge,” or simply having a history of working at an organization, can go a long way toward knowing the location and function of hosts. However, attribution data must be perpetually available to all analysts in the event that someone departs with the tribal knowledge and without leaving documentation for remaining staff. It’s important to have reliable systems of record available for your network. Useful attributes you should seek to collect include the following:

§ Location (Theater/Building/Rack)

§ State (On/Off/Retired)

§ IP Address

§ DNS Hostname

§ Priority

§ Description

§ MAC Address

§ NetBIOS/Directory Domain

§ Compliance Record

§ OS Name/Version

§ Applications

§ Network Address Translation (NAT)

§ Business Impact

§ Business Owner

§ Network Location/Zone

§ SNMP Strings

§ Emergency Pager

§ Address Lease History

§ Escalation Contact

§ Lab ID

§ Primary Contact

§ Function

§ Registration Date

Possibly the easiest place to find some of this information is in the inventory systems maintained by your network and system administrators. Centralized solutions like Nagios, IBM Tivoli, HP OpenView, and many other products or custom solutions can provide these types of host information and monitoring databases. These systems can often store names or contact information for the asset owner, information on the running system, or a description of the host’s purpose. Network, system, and lab administrators, along with application developers, might all maintain separate inventory systems. For every inventory system, the data must be reliable at any given point in time. An Excel spreadsheet, tediously and periodically updated by hand, will inevitably give way to stale data of no use during investigations. Getting a one-time dump of all known host information is only useful until something changes. Some of the data best practices covered in Chapter 4 apply equally well to asset inventory systems. Getting the CSIRT access to these inventory systems, or at least the data in the systems, provides a gold mine of attribution information necessary for incident response and advanced event querying.
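To illustrate why structured, queryable inventory data matters, here is a minimal Python sketch that models a handful of the attributes listed earlier and resolves an IP address to an owner and escalation contact. The record fields and sample data are hypothetical placeholders; in practice, the lookup would run against whatever system of record your organization maintains.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AssetRecord:
        """A few of the attribution fields worth tracking for every host."""
        ip_address: str
        dns_hostname: str
        location: str          # theater/building/rack
        business_owner: str
        escalation_contact: str
        function: str

    # Hypothetical sample inventory; in practice this comes from your system of record.
    INVENTORY = [
        AssetRecord("10.1.20.15", "build-server-01.example.com", "SJC/Bldg4/Rack12",
                    "Engineering Infrastructure", "eng-infra-oncall@example.com",
                    "Source code build farm"),
    ]

    def attribute_host(ip: str) -> Optional[AssetRecord]:
        """Return the inventory record for an IP, or None if the host is unknown."""
        for record in INVENTORY:
            if record.ip_address == ip:
                return record
        return None

    if __name__ == "__main__":
        record = attribute_host("10.1.20.15")
        if record:
            print(f"{record.ip_address} -> {record.business_owner} ({record.escalation_contact})")
        else:
            print("Unknown host: no attribution data on file")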

Bring Your Own Metadata

Some of the clues you can discover on your own. Additional attribute information and context come from infrastructure logs. Hosts with DHCP IP address reservations, VPN and authentication services (RADIUS, 802.1x, etc.), or network or port translation records (NAT/PAT) all include transient network and host addresses. Mining logs from these services can tie a host, and potentially a user, to a network address at a given point in time, a critical but often difficult requirement when investigating events. We’ve seen entire investigations come to a dead end because we were unable to precisely attribute hosts at various points in time due to a lack of transient network logs, or more embarrassingly, because the timestamps were incorrect or in an unexpected time zone. The importance of standard time zones and proper time synchronization cannot be overstated, and the use of accurate Network Time Protocol (NTP) is highly recommended.
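As a concrete, if simplified, illustration of that kind of lookup, the following Python sketch answers the question an analyst actually asks: which host (by MAC address) held a given IP address at a given UTC timestamp? The lease records are hypothetical; real DHCP, VPN, and NAT logs vary widely, but the principle of mapping a transient address back to a host at a point in time is the same.

    from datetime import datetime, timezone
    from typing import Optional, NamedTuple

    class Lease(NamedTuple):
        ip: str
        mac: str
        start: datetime
        end: datetime

    # Hypothetical lease history parsed from DHCP server logs (all times in UTC).
    LEASES = [
        Lease("10.20.30.40", "00:11:22:33:44:55",
              datetime(2015, 3, 1, 9, 0, tzinfo=timezone.utc),
              datetime(2015, 3, 1, 13, 0, tzinfo=timezone.utc)),
        Lease("10.20.30.40", "66:77:88:99:aa:bb",
              datetime(2015, 3, 1, 13, 5, tzinfo=timezone.utc),
              datetime(2015, 3, 1, 18, 0, tzinfo=timezone.utc)),
    ]

    def mac_for_ip_at(ip: str, when: datetime) -> Optional[str]:
        """Return the MAC address that held `ip` at time `when`, if known."""
        for lease in LEASES:
            if lease.ip == ip and lease.start <= when <= lease.end:
                return lease.mac
        return None

    # The same IP maps to two different hosts depending on the timestamp,
    # which is why accurate, time-synchronized logs are essential.
    print(mac_for_ip_at("10.20.30.40", datetime(2015, 3, 1, 10, 30, tzinfo=timezone.utc)))
    print(mac_for_ip_at("10.20.30.40", datetime(2015, 3, 1, 15, 0, tzinfo=timezone.utc)))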

Network Address Translation (NAT) is of particular concern due to its prevalence and its masking effect on the “true” client source IP. Administrators rarely enable NAT logging, whether due to configuration complexity, log churn (i.e., too much data), or performance concerns. Similarly, log data from a web proxy often contains only the source IP address of the web proxy itself and not the true client address. Fortunately, most proxies can be configured to append additional headers, such as Via: and X-Forwarded-For, to proxied requests so that both the proxy and the originating IP are recorded.
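For example, a minimal sketch of recovering the originating client from an X-Forwarded-For value might look like the following. The sample value is hypothetical; note also that clients can forge this header, so treat it as investigative context rather than proof.

    def originating_client(xff_header: str) -> str:
        """Return the leftmost (original client) address from an X-Forwarded-For value."""
        return xff_header.split(",")[0].strip()

    # Hypothetical header value: leftmost is the original client; later entries are intermediate proxies.
    xff = "192.0.2.77, 10.1.1.5"
    print(originating_client(xff))   # -> 192.0.2.77, the true client behind the proxy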

VPN and DHCP logging present their own challenges (although they can also yield a great deal of reward) simply because of the rapid turnover of network addresses associated with dynamic addressing protocols. For instance, I may authenticate to a VPN server, drop my connection, reauthenticate, and end up with a completely new address. Or I may be walking from building to building, getting a new DHCP lease from each unique wireless access point I connect to on the way. The shift to hosted infrastructure, or “cloud,” brings additional challenges. As with NAT, not only are network connections being set up and torn down, but so are entire instances of an asset, including all running processes and memory on that asset. Getting ahead of these problems is crucially important when chasing down a host for investigation.

Many of these attributes are not as important or even relevant to some desktop, end-user, or lab systems. However, it’s critically important to ensure that any and every host going into your datacenter, regardless of its purpose, has some details listed as to its owner and its function. Left to chance, there will be hosts that end up abandoned with no identifiable owner. Enforcing owner and other asset information tracking may seem like burdensome red tape, but it prevents things from slipping through the cracks and investigations reaching a dead end. Telling people they are accountable for a host will make them think twice about deploying it without the proper controls.

Minimum access policies for datacenters and other critical network areas need to include requirements for host attribution data. For example, we will not let a new host come up in the datacenter without being first entered into the correct systems of record. These requirements are particularly important in virtual environments where there is not a physical host for the “feet on the street” to examine, and a virtual server farm may contain thousands of hosts or instances. Virtual machine (VM) administrative software should contain VM attribution log data that can help identify a group or host owner, no matter the purpose of the VM.

A great example from an incident we worked on illustrates the importance of proper attribution, even in its most basic forms. Microsoft had recently disclosed a critical, remotely exploitable vulnerability, and we immediately set out to ensure that our enterprise was patched and prepared. Patch deployment was very quick, covering all but about 10 of the impacted hosts in the company. Thousands of Windows hosts successfully applied the critical patch, yet those 10 remained vulnerable and on the network. Looking through the systems of record at the time, we knew the hosts were running Windows, but there were no clues as to who owned them, what they were for, or why they were not yet patched. To add to the confusion, the hosts were geographically dispersed and had inconsistent hostnames. Finally, after some major digging (ultimately a switchport trace to one of them), we discovered that they were audio/video control panels used in some of the conference and briefing rooms. These devices ran embedded versions of Windows that we were unable to patch through the normal mechanisms and that required “high-touch” local support.

Eventually, we tracked down the vendor, which had issued a patch allowing the new update, and we even found the contractors responsible for maintaining the panels and required them to update them. Without much digital information to go on, we relied on basic detective work to find the owners. Had a more solid metadata and contact requirement been in place, we could have notified the proper support group immediately and had the hosts patched as soon as the vendor supported it, minimizing their potential downtime.

Network and security administrators have plenty of tools available for logically tracking down hosts, such as traceroute, Address Resolution Protocol (ARP) and address tables, Network Mapper (Nmap), Cisco Discovery Protocol (CDP), and many others. However, as these anecdotes illustrate, it’s very possible for hosts to simply be forgotten. How can you protect your network and business if you don’t even know what systems there are to protect?
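One simple way to surface forgotten hosts is to compare what actually responds on a subnet against what your inventory says should be there. The sketch below shells out to Nmap for a ping sweep and flags responding addresses with no inventory record; the subnet and inventory set are placeholders, and you should only scan networks you are authorized to scan.

    import subprocess

    # Hypothetical inventory of addresses with known owners (see the earlier sketch).
    KNOWN_HOSTS = {"10.1.20.15", "10.1.20.16"}

    def live_hosts(subnet: str) -> set:
        """Ping-sweep a subnet with nmap (-sn) and return responding IP addresses."""
        result = subprocess.run(
            ["nmap", "-sn", "-oG", "-", subnet],
            capture_output=True, text=True, check=True,
        )
        live = set()
        for line in result.stdout.splitlines():
            if line.startswith("Host:") and "Status: Up" in line:
                live.add(line.split()[1])
        return live

    if __name__ == "__main__":
        for ip in sorted(live_hosts("10.1.20.0/24") - KNOWN_HOSTS):
            print(f"Unattributed live host: {ip} -- find an owner before an attacker does")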

Identifying the Crown Jewels

When we investigated the question What are we trying to protect? for our organization, we arrived at the following:

§ Infrastructure

§ Intellectual property

§ Customer and employee data

§ Brand reputation

The infrastructure equates to all the hosts and systems running on the corporate network, and the network itself. Protecting the infrastructure means ensuring that the confidentiality, integrity, and availability of the hosts, applications, and networks that underpin our organizational processes are protected. In our case, the intellectual property really refers to source code, current and future business and financial practices, hardware prototypes, and design and architecture documents. A data loss incident could result in losing credibility as a security solutions vendor. The same goes for brand reputation, which means a great deal in a competitive industry.

Many of these topics may be of concern to your organization as well. However, depending on the industry, there may be additional items to protect. Healthcare systems, for example, demand strict privacy protections and a solid audit trail of all patient information. Credit card payment systems may have additional monitoring requirements to ensure financial data isn’t exposed. Financial and banking systems have additional controls in place to monitor for fraudulent transactions. Requirements can and will be dictated by industry standards, government regulations, and accepted best practices.

Regardless of industry, you can try to determine your own crown jewels by:

§ Focusing on the applications and services that provide your critical infrastructure

§ Deciding which data would be the most deleterious to lose externally and where it’s stored

§ Knowing which systems have the highest impact to ongoing operations if compromised

Make Your Own Sandwich

One of the very first assignments in an introductory computer science course is: write an algorithm to describe how to make a peanut butter and jelly sandwich. The idea being that you already know how to make one, but you have to teach the computer how to do it. Initial attempts to describe the algorithm verbally are usually horribly incomplete, and would never lead to an accurate sandwich designed by a software program. There are some basic assumptions, of course, like the computer knows how to access the necessary inputs (peanut butter, bread, jelly, knife), but how to actually make the sandwich is what separates the brain from the computer. It sounds like a simple task, but in reality it is far from it. Humans can quickly recognize relationships, inferences, or historical information, and take calculated risks based on reasonable assumptions. Even if you had never made a sandwich before, you would quickly figure out that the peanut butter and jelly are somehow applied to the bread. A computer will only do exactly what you tell it to do—nothing more, nothing less. In fact, describing the algorithm to make a sandwich to the computer is quite lengthy and complex.

In determining how to figure out what to protect on your network, we are giving you the algorithm, but it’s up to you to provide the inputs. We can’t possibly predict or infer what is worth protecting and what costs are justified in doing so for your environment. What we can do, however, is guide you to this understanding on your own. Answering the four questions posed earlier is the first step.

So how did you answer the What are we trying to protect? question posed at the beginning of the chapter? Hopefully, at this point, you’ve realized that your organization, along with most every other one, has something worth protecting, whether it’s a physical product, a process, an idea, or something that no one else has. You, as the incident response or other security team, are tasked with protecting it. If someone stole the top-secret recipe for your famous soft drink, wouldn’t the thief or anyone to whom they sold the secret be able to reproduce it at a potentially lower cost, thereby undermining your profits? Extend the recipe metaphor to things like software source code, ASIC and chip designs, pharmaceutical methods, automotive part designs, unique financial and actuarial formulae, or even just a giant list of customer data, and there are plenty of things to lose that could devastate a company.

Start with the obvious and move on to the more esoteric. If all your patient records are stored in a database, by all means you should log and audit all database transactions. If your software source code resides on multiple servers, be certain you have thorough access control and audit logs to prove precisely who accessed what data and when. If your proprietary formulas, recipes, or designs reside on a group of servers, you should have as much accounting as possible to make sure you can understand every transaction. If you’re in retail, your datacenters and financial systems are critical, but don’t forget the importance of the point-of-sale (POS) systems at each location. Malware running on POS systems, skimming customer payment cards and personal information, has proven disastrous for many organizations. For your most critical assets, you should be able to answer whether data left the company’s boundary, either through a long-running encrypted session to a remote drop site, or simply copied to a CD or a USB drive and carried off premises. It’s easier said than done, and there are many challenges that can make nonrepudiation very difficult.
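As a starting point for that kind of accounting, the following sketch reads a hypothetical database audit log (assumed to cover a single day) and flags any account that read an unusually large number of rows from a sensitive table. The column names, table name, and threshold are assumptions to be replaced with whatever your audit trail and baselines actually look like.

    import csv
    from collections import Counter

    BULK_READ_THRESHOLD = 500  # assumed daily per-user baseline; tune to your environment

    def flag_bulk_readers(audit_csv_path: str, sensitive_table: str = "patient_records"):
        """Count rows read per user against a sensitive table and flag outliers."""
        reads = Counter()
        with open(audit_csv_path, newline="") as f:
            # Hypothetical columns: timestamp,user,action,table,row_count
            for row in csv.DictReader(f):
                if row["table"] == sensitive_table and row["action"] == "SELECT":
                    reads[row["user"]] += int(row["row_count"])
        return [(user, n) for user, n in reads.items() if n > BULK_READ_THRESHOLD]

    if __name__ == "__main__":
        for user, count in flag_bulk_readers("db_audit.csv"):
            print(f"Review: {user} read {count} rows from patient_records today")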

Enabling collaboration often comes at the cost of looser access control, although the expectation of keeping private data private falls squarely on the shoulders of the security architects and the incident response team. Using the source code example, if there are various business units all working on a similar project that shares code libraries, it’s possible that you’ll have to permit broader access than you’re comfortable with. The same goes for researchers from different universities or associations. Good security is often a double-edged sword. Placing onerous controls on a system can absolutely lock it down, but if it’s unusable to its operator, what good is it? Striking a balance between business need and security is one of the more difficult problems to solve in the security world, and is an ever-present struggle.

We’ll get into risk tolerance a bit more later, but understand that even though you may know what to protect and how to tighten controls to protect it, there will always be areas in which the security posture must be relaxed to enable progress, innovation, and usability. The most important thing to remember, despite any relaxed controls, is that you need to understand where the most valuable data lies, whether production, development, disaster recovery, or backup, and have a solid understanding of who accessed it, when, and from where.

More Crown Jewels

When considering the “crown jewels,” don’t restrict yourself to only data, hosts, or network segments. Consider an organization’s executive employees, finance and business development leaders, engineering leaders, or system and network administrators.

These high-value targets have access to data interesting to hackers:

§ Executives likely have access to financial or competitive information including mergers, acquisitions, or profit data that could be leveraged for trading fraud.

§ Engineering leaders have schematics, diagrams, and access to numerous projects that could be stolen or modified by attackers.

§ System administrators essentially have the “keys to the kingdom,” and a successful attack on them could lead to catastrophic problems.

As such, it’s important to focus specialized tool deployment and monitoring efforts on those groups of individuals. Beyond typical malware and policy monitoring, you may consider additional monitoring software on the high-value targets and more options for quick remediation. Different groups access different systems and types of data, so having an understanding of their roles and typical operations will augment this more focused monitoring. Forcing tighter controls at the system and network layer requires attackers to become more creative to achieve their goals.

Despite the best security awareness efforts, social engineering rarely fails except against the most savvy (and sometimes lucky) personnel. Think phishing attacks against those who have the most to lose (or the most worth stealing). In one example, attackers successfully took out DNS services for the New York Times, part of Twitter, and other high-profile websites after phishing domain administrators at those organizations. If attackers are really dedicated and either are incapable of good social engineering or have exhausted all the old tricks without success, they might employ “watering hole” attacks, in which they compromise websites commonly visited by their targets in the hopes of compromising at least one of them. Attackers are crafty and will find a way to exploit either a software or a human vulnerability. To combat the classic attacks, hopefully you have a layered endpoint protection plan for your entire organization, including host-based intrusion prevention, antivirus, and remote forensic capabilities. If you don’t, these high-profile or high-value individuals and their devices would be a good place to start. From a monitoring perspective, you might analyze the plays more frequently, have a lower tolerance threshold for risky activity, or require an expedited escalation process.

Low-Hanging Fruit

After focusing attention on your crown jewels, don’t forget about the rest of the organization. Although they carry the greatest risk, your high-value assets account for only a small portion of your entire infrastructure. Mature organizations will have InfoSec policies specifically crafted to meet their business requirements. Culture, risk acceptance, past problems, legal requirements, government regulation, and business relevance generally determine your organization’s policies. Explicitly defining what is allowed or disallowed provides the policy backing required to justify proper security monitoring and incident response. When technological limitations prevent enforcement of the policies, you’ll need some sort of monitoring to determine if and when that policy has been violated.

Common IT policies adopted by most organizations that affect security include:

§ Acceptable Use Policy (AUP)

§ Application Security

§ Network Access

§ Data Classification and Protection

§ Account Access

§ Lab Security

§ Server Security

§ Network Devices Configuration

Directives in these policies can seed your playbook with basic detection strategies to support your enforcement. For example, an AUP might prohibit port scanning or penetration testing. Lab policies may require encrypted authentication protocols, mandatory usage of web proxies, or basic system hardening practices. Network device policies may forbid certain protocols or require encrypted communications. Each of these specific types of network activity can be detected and reported against. Similar to security policies, organizations often maintain standards documents that specify additional requirements. Host hardening standards, system and application logging standards, and other technical guidelines all help define specific controls that can be audited or monitored.
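As an example of turning a policy directive into a play, the sketch below reads hypothetical flow records and flags any source that touched an unusually large number of distinct destination ports, a classic indicator of the port scanning an AUP typically prohibits. The field names and threshold are assumptions to be tuned against your own traffic.

    import csv
    from collections import defaultdict

    DISTINCT_PORT_THRESHOLD = 100  # assumed; tune against normal behavior on your network

    def detect_port_scans(flow_csv_path: str):
        """Flag sources contacting many distinct destination ports in one reporting window."""
        ports_by_source = defaultdict(set)
        with open(flow_csv_path, newline="") as f:
            # Hypothetical flow export columns: src_ip,dst_ip,dst_port,protocol,bytes
            for row in csv.DictReader(f):
                ports_by_source[row["src_ip"]].add(row["dst_port"])
        for src, ports in ports_by_source.items():
            if len(ports) > DISTINCT_PORT_THRESHOLD:
                yield src, len(ports)

    if __name__ == "__main__":
        for src, count in detect_port_scans("flows.csv"):
            print(f"Possible AUP violation: {src} probed {count} distinct ports")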

Standard Standards

Once properly interpreted, regulatory compliance standards can be another source of detection ideas for your playbook. Too many organizations minimally adhere to the letter of the law to satisfy controls, while implementing incomplete and ineffective solutions that provide little value for detection. We like to refer to this as “checkbox security.” Essentially, you are only checking a box in a list of requirements rather than truly securing your environment. This compliance-driven approach may satisfy the auditors, but it will not keep your data safe, and can ultimately backfire when a real incident occurs. Regardless of whether your organization is subject to regulatory oversight like the Payment Card Industry Data Security Standard (PCI DSS, or simply PCI), the Health Insurance Portability and Accountability Act (HIPAA), or the Financial Services Modernization Act (FSMA, or the Gramm–Leach–Bliley Act, or GLBA), like basic IT policies, the intent, or spirit, of these standards can be turned into actionable objectives for your playbook. If passing an audit is your main (although misinformed) concern, then having a playbook (and handbook) in place that shows how incidents are handled if and when they occur will also go a long way toward showing due diligence despite any boxes you have checked.

Each high-profile standard has its own requirements and idiosyncrasies that go beyond the scope of this book. However, it’s worth highlighting how the main ideas behind certain portions of the standards can be used to determine what you should protect, and in some cases, actually how to protect it. The Cloud Security Alliance’s guidelines are a good example of a measurable policy. Among other things, they describe various controls suggested when using cloud computing that are germane to most organizations, regardless of how they choose to host their systems and information. Additionally, they have mapped similar controls from many different regulatory compliance standards into a single Cloud Controls Matrix (CCM).

Table 2-1 highlights a few example specifications from the CSA CCM that can serve as best-practice ideas for security monitoring and play creation. For example, the controls suggest detecting Layer 2 network-based attacks such as MAC spoofing and ARP poisoning, as well as DoS attacks and rogue wireless devices, and they call for higher-level mitigation capabilities.

Control domain: Infrastructure & Virtualization Security; Network Security
Control ID: IVS-06
CSA control spec: ...Technical measures shall be implemented to apply defense-in-depth techniques (e.g., deep packet analysis, traffic throttling, and packet black-holing) for detection and timely response to network-based attacks associated with anomalous ingress or egress traffic patterns (e.g., MAC spoofing and ARP poisoning attacks) and/or distributed denial-of-service (DDoS) attacks.

Control domain: Infrastructure & Virtualization Security; Network Security
Control ID: IVS-12
CSA control spec: ...The capability to detect the presence of unauthorized (rogue) wireless network devices for a timely disconnect from the network.

Control domain: Datacenter Security; Secure Area Authorization
Control ID: DCS-07
CSA control spec: ...Ingress and egress to secure areas shall be constrained and monitored by physical access control mechanisms to ensure that only authorized personnel are allowed access.

Control domain: Identity & Access Management; Third-Party Access
Control ID: IAM-07
CSA control spec: The identification, assessment, and prioritization of risks posed by business processes requiring third-party access to the organization’s information systems and data shall be followed by coordinated application of resources to minimize, monitor, and measure likelihood and impact of unauthorized or inappropriate access. Compensating controls derived from the risk analysis shall be implemented prior to provisioning access.

Table 2-1. Example controls from the CSA Cloud Controls Matrix

Risk Tolerance

Earlier, we touched on how risk awareness plays a big role in determining what to protect. An in-depth discussion of all the facets of risk management goes way beyond the scope of this book. Yet a brief discussion is unavoidable as risk management is directly tied to understanding your network and how to defend it. Fundamentally, the question to ask yourself is, what do I have to lose? Knowing what to protect and what you have to lose represent the first steps in dealing with risk management and building an effective security monitoring and incident response program.

Before you can get into all the risk handling methods like avoidance, transfer, mitigation, and acceptance, you have to know where the important systems and assets are located and what could happen if they were negatively impacted by an InfoSec breach. ISO 31000:2009 details how to manage risk and how to respond.

Risk treatment can involve:

§ Avoiding the risk by deciding not to start or continue with the activity that gives rise to the risk

§ Taking or increasing risk to pursue an opportunity

§ Removing the risk source

§ Changing the likelihood

§ Changing the consequences

§ Sharing the risk with another party or parties (including contracts and risk financing)

§ Retaining the risk by informed decision

Connecting a computer to the Internet creates a risk. That is, if it’s reachable by an attacker, it’s likely to be attacked. Providing access to a computer system to more than one person increases the risk that something can go wrong, and the more people with accounts, the higher the risk becomes. Risk is proportional to the amount and level of access you provide. Really we’re just talking about the principle of least privilege, which refers to allowing a user access only to the data and tools required to fulfill their duties (rather than mass privileges per team or department).

Taking a cue from ISO 31000:2009, you can “change the likelihood” of a problem by keeping tighter access control. Tighter access control requires you to know who can log in (and who has logged in), when, from where, for how long, and why. If you don’t know where your important systems are, and you don’t know who is logging in, you’ve already increased your risk profile substantially. What we are saying goes a bit beyond ISO 31000, in that you must not only focus on the likelihood or prevention of a risk, but also on having an awareness of risk in your organization.
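To make “changing the likelihood” a bit more concrete, here is a minimal sketch of a least-privilege audit: it reviews a hypothetical authentication log for a critical host and reports successful logins by accounts that are not on the approved access list. The log columns, hostname, and allowlist are assumptions; the point is simply that you cannot run this kind of check if you do not know which systems matter and who is supposed to access them.

    import csv

    # Assumed allowlist for a crown-jewel host; maintain it in your system of record.
    AUTHORIZED_USERS = {"dba_admin", "backup_svc"}
    CRITICAL_HOST = "sourcecode-repo-01"

    def unauthorized_logins(auth_csv_path: str):
        """Yield successful logins to the critical host by accounts outside the allowlist."""
        with open(auth_csv_path, newline="") as f:
            # Hypothetical columns: timestamp,user,host,src_ip,result
            for row in csv.DictReader(f):
                if (row["host"] == CRITICAL_HOST and row["result"] == "success"
                        and row["user"] not in AUTHORIZED_USERS):
                    yield row["timestamp"], row["user"], row["src_ip"]

    if __name__ == "__main__":
        for ts, user, src in unauthorized_logins("auth_log.csv"):
            print(f"{ts}: unexpected login to {CRITICAL_HOST} by {user} from {src}")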

Can I Get a Copy of Your Playbook?

All this is to say that there is no exhaustive rubber-stamp approach to defining everything you need to protect. You should strive to make the best effort with the information you have, as it’s the best way to inform your monitoring strategy. Again, you cannot begin to define your playbook strategy until you have a solid understanding of what is most important to protect. Our playbook is unique to our organization, as your playbook will be to yours. You will have different answers to the question of What are we trying to protect?, and while we wrote this book to help you develop your own playbook, only you can answer the four core questions. Like ours, the plays in your playbook help you protect the unique environment that you’ve been charged to monitor.

Chapter Summary

§ You can’t properly protect your network if you don’t know what to protect.

§ Define and understand your critical assets and what’s most important to your organization.

§ Ensure that you can attribute ownership or responsibility for all systems on your network.

§ Understand and leverage the log data that can help you determine host ownership.

§ A complex network is difficult to protect, unless you understand it well.