Chapter 11. How to Stay Relevant

“Who controls the past, controls the future...”

George Orwell

In 1983, the first mobile phone became available for consumer purchase (Figure 11-1). More than 30 years later, there are over seven billion wireless subscriptions globally, and almost half of the human population owns at least one mobile phone. Anyone in the early 1980s thinking about the first mobile phones would have difficulty imagining how much the tiny portable devices (now used everywhere by most people, for a significant part of their day) would permanently change human society and culture. No longer is a phone just a way to speak with another person. You have a constantly available record of your communications with anyone in your address book or otherwise. It is a news source, a radio, a television, a camera, a GPS, and a means of interacting with your money, to name only a few uses. Civilization is rapidly evolving toward ubiquitous computing, and discovering all the challenges that come with it—not the least of which are privacy and information security. The mobile phone phenomenon is one of many technology shifts throughout history from which we can learn, in order to reasonably anticipate future trends, problems, and challenges.

Figure 11-1. DynaTAC 8000X circa 1984

Computing trends are often cyclical, coming into, going out of, and coming back into favor. For instance, computing resources were once centralized in shared mainframe computers. As hardware prices (and sizes) shrank and new computing models developed, organizations moved toward on-premises solutions. Eventually, virtualization scaled to the point where most organizations have reverted to a centralized infrastructure, allowing a third party to manage virtualized network, compute, and storage layers. From a security monitoring perspective, each environment presents its own challenges. Though the variables and environments may change, your security monitoring methodology must rest on a repeatable process for identifying which threats to monitor and how to detect and respond to them. The medium and hardware may have changed, but many of the attacks, motives, and asset types you must protect have not. The playbook methodology helps you keep up with the pace of change. Technology and environments are only variables in the overall approach. As times and trends change, the playbook remains the framework you need to evolve your security monitoring and incident response processes. By reflecting on the past to prepare for the future, defenders can ready themselves and their networks for inevitable attacks.

In this chapter, we’ll discuss some of the social and cultural components that drive technology change; how the expansion of technology throughout our daily lives affects security response; and how the playbook approach keeps your incident response and security monitoring processes relevant in the future.

Oh, What a Tangled Web We Weave, When First We Practice to Deceive!

If you are reading this text, chances are that technology is a significant influence in your life. It’s even likely you’re reading this on a computing device, be it a laptop, phone, or other e-reader. At the same time, devices around you are connected through cellular networks, local wireless IP networks, personal area networks, Bluetooth connections, and any number of other radio protocols. We use these connected technologies to communicate with others via audio, video, and text, and to consume media, get directions, order and pay for food and other items, access our banks, lock our doors, change environmental settings in our homes, play games, meet new friends, and perform countless other daily tasks that are now commonplace. As networks continue to grow, the potential information they can maintain and create expands. Look no further than Metcalfe’s law, which states that the value of a telecommunications network is proportional to the square of the number of connected users of the system. In other words, the more people who are interconnected on a network, the more valuable and desirable that network becomes to them.
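
To put a rough number on that intuition, a network of n users offers on the order of n-squared potential pairwise connections, which is the quantity Metcalfe’s law ties value to (a simple worked form, not a formula from this book):

\[
\text{pairwise connections}(n) \;=\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2},
\qquad\text{hence}\qquad V(n) \propto n^{2}.
\]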

Network connectivity is a foundational component of these activities. So foundational, in fact, that the President of the United States decreed the Internet be protected as a utility, as common and necessary to the people as water or electricity. Just as Moore’s law predicts the growth of computing power in hardware, Edholm’s law postulates that network bandwidth will eventually converge across all current methods of network access (wireless, mobile, and local area networks). Everyone with a networked device can transmit data from anywhere at any time, regardless of their connection type. This is one of the main benefits of layered networking: no matter what layer you change, the other layers still function. With regard to user experience, watching a video online doesn’t change whether you are connecting through IPv4 or IPv6, or whether your frames are forwarded over LTE, IEEE 802.3, or 802.11 links. More people with more devices only increase the network bandwidth necessary to handle the additional traffic, which in turn stimulates more innovative networked applications. However, additional networked users are not the only driving force in the throughput supply and demand equation. When bandwidth was so low that we could only transmit text, the idea of transmitting a decent-resolution image did not occur to most people. When bandwidth allowed acceptable transmission of images, the idea of sending video didn’t occur to most. The past has demonstrated that every time you try to satisfy the needs of today, you end up enabling the technology of tomorrow. In the process of increasing bandwidth and access, you’re actually enabling the exponential growth of ideas and new uses for the technology. Always keep in mind, though, that higher throughput demands that monitoring devices and storage keep pace with the additional traffic and log data.

Cheaper disk storage and faster network throughput also create a ripe environment for online backup and file hosting services. Local backups to physical storage can save you headaches in the event of data loss. But during a catastrophic event where even your local backups are destroyed, hosted services allow you to retrieve your data and files over the Internet. Naturally, entrusting your data to someone else entails a certain loss of control in the event of a data breach. As a consumer of the service, you expect that the hosting company secures the infrastructure on which your data resides.

The influx of technology into our daily lives has had, and will continue to have, multiple effects on the attack surface. The first is an increase in the scale of available devices. Growing markets for mobile devices, networked automobiles, smart meters, wireless light bulbs and lamps, and wearables will add millions of nodes to the Internet, many of which will require constant connectivity. Industrial control systems and municipal systems like traffic monitoring and utility measurement also add to the mix of network-connected devices. Logistical industries like shipping, freight, and trucking rely on Internet connectivity to track, plan, and reroute their shipments. In addition to the sheer number of nodes, each device will have its own network stack and common and custom applications. Applications will invariably have bugs, and bugs introduce vulnerabilities that an attacker can exploit. As with the current vulnerabilities on your network, you’ll need to identify, detect, and mitigate issues resulting from these newly networked devices. By applying the playbook methodology to the new attack surfaces, you will identify different or new log sources, remediation processes, and mitigation capabilities to support the monitoring and incident response these new devices require.

The increase in devices will also affect attacker motivations. We’ve seen an explosion of growth in the criminal hacking and malware “industry” over the last decade because bad guys have finally started figuring out ways to monetize their exploits (pun intended). It used to be that virus writing was a digital version of graffiti, and provided a way to show off your skills and gain underground credibility. Now that there is real money in it, the hobbyist aspect no longer dominates, and monetary rewards attract more profiteers and criminal organizations. Any computing resource that can be exploited for money will be. There have always been ways to make money with CPU power, but cryptocurrency made the link so close that it became extremely easy to monetize the resource. Online advertising and syndication networks made it easy to monetize network connectivity via click fraud. Always-on networking also made DoS as a service easy (e.g., shakedowns via booter services). The rise in mobile devices led to abuse of premium SMS messaging services legitimately used for things like ringtone downloads or mobile payments. The point is, any time a link between a resource and a way to monetize that resource is made, bad guys will find a way to fill that niche.

Allowing digital devices to control the physical world in more ways only adds to the potential attack surface. Clever hackers are demonstrating how to steal cars over the Internet and clone RFIDs or other tokens for additional thefts or impersonation. Attackers have already compromised and controlled trains, buses, and traffic control systems. Imagine losing control of a building’s power, fire suppression, or heating and cooling systems. Critical industrial controls modified by attackers might lead to serious consequences. How long would it take for your datacenter to melt down after the air conditioning has been disabled? How long until your nuclear plant runs out of fuel and shuts down after your centrifuges explode?

The Rise of Encryption

Law enforcement agencies have long been interested in the dual-use nature of encryption. As a means of protecting information and communications, it has practical applications for everyone from governments and militaries to corporations and individuals. As a means of evasion and obfuscation, it provides a sense of security for miscreants. Governments have historically tried to regulate the use, distribution, and exportation of cryptographic technologies, in some cases labeling encryption algorithms as “munitions.” Prohibitions on publishing encryption techniques have even been challenged in American courts on free speech grounds. On a global scale, the Wassenaar Arrangement is a multinational agreement that aims to regulate the export of dual-use technologies, including encryption.

In some cases, governments and law enforcement agencies have influenced and/or infiltrated encryption development, fearing they would otherwise be powerless in cases that hinge on computer-based evidence.

NOTE

A fantastic example of government interference in encryption technology was the Clipper chip. This chipset, designed for encrypted telecommunications, included a key escrow system with an intentional backdoor. The backlash was harsh.

The U.S. Federal Bureau of Investigation (FBI) is suspected to have convinced Microsoft to leave some investigative techniques available in its BitLocker full disk encryption software. The Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG) was also backdoored by the NSA to allow recovery of cleartext from any cryptography seeded by the pseudorandom number generator. The Dual_EC_DRBG algorithm has a curious property: it’s possible to hold a secret key that makes the algorithm trivially breakable to anyone with the key, yet completely strong for anyone without it. The NSA never mentioned that this secret-key backdoor capability existed, but researchers in the public eventually found that the algorithm could have this “feature.” The NSA was still able to push the U.S. National Institute of Standards and Technology (NIST) to standardize it even with this as public knowledge, and the International Organization for Standardization (ISO) also eventually standardized the algorithm. It wasn’t until leaked classified documents came out that there was essentially proof that the NSA intentionally designed the algorithm this way.

Law enforcement agencies are concerned about the use of encryption because they lose another technique for collecting evidence. Consider this quote from FBI Director James Comey’s remarks at the Brookings Institution on default encryption on smartphones:

Encryption isn’t just a technical feature; it’s a marketing pitch. But it will have very serious consequences for law enforcement and national security agencies at all levels. Sophisticated criminals will come to count on these means of evading detection. It’s the equivalent of a closet that can’t be opened. A safe that can’t be cracked. And my question is, at what cost?

False equivalency and false dilemma fallacies aside, the FBI director is highlighting the double-edged sword of encryption, and of technology in general. With more encryption can come better privacy, but potentially less overall security. The unfortunate caveat is that not all encrypted communications or data are innocent, and if everyone is to benefit from mobile phone encryption, there is no way to know which are which. In general, technology moves faster than law enforcement can adapt. Evidence is unavailable when agencies are unable to defeat digital protections. In those cases, other tried and tested investigative techniques and police work can still ferret out tangible evidence.

The proliferation of the Internet and its millions of networked devices, along with a propensity, if not incentive, for storing personal data on Internet-hosted systems, has set the stage for potentially disastrous data loss. Massive data breaches and leaks from well-known corporations and organizations have had a profound effect on the average Internet user. An educated user base has demanded increased use of encryption for personal data, along with the expectation that it be protected against criminals, governments, and militaries alike. People want encryption now because their whole lives are online. We expect that transferring money from credit cards or mobile phones should be encrypted and secure. We expect that information we believe to be private should be kept private, and more so, that we should have control over who can access our information.

Encrypt Everything?

The specter of pervasive encryption has kept some security monitoring professionals from sleeping at night. Having all files and network transmissions encrypted to and from attackers seems like a nightmare scenario that yields few fruitful investigations. After all, if you can’t see precisely what’s in the traffic leaving your organization, how can you know for sure what might have been lost? In the security monitoring context, there are only two practical options for handling encryption: intercept, decrypt, inspect, and re-encrypt (known as man-in-the-middle, or MITM), or ignore encrypted traffic payloads. If MITM is unacceptable or impossible, there is still plenty of data to go around. Metadata from network traffic and other security event sources can create additional investigative paths and still solve problems.

Recall the Conficker worm, which is likely still running through the unpatched backwaters of the Internet, impotent and headless after numerous coordinated takedown efforts. The worm encrypted its payloads, with later variants using key lengths of 4,096 bits, and ultimately caused millions of dollars in damages for many organizations, including military and government agencies. It also used a domain generation algorithm (DGA) to produce a pseudorandom list of domains for its bot check-in component. This last component (among others in the C2 protocol) is detectable with IPS, or even with web proxy logs or passive DNS (pDNS) data. Conficker also uses a UDP-based peer-to-peer protocol that’s easily identifiable with IPS or other monitoring tools. The encrypted contents of the Conficker payload are irrelevant as long as you can detect its traffic patterns on the network and shut it down.
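
That check-in behavior is exactly the kind of pattern that surfaces in logs even when payloads are encrypted. As a minimal sketch (not a tuned detector), the following Python scores domains harvested from web proxy or pDNS logs for DGA-like traits such as long labels, high character entropy, and long consonant runs. The input file name, the weights, and the 0.7 cutoff are illustrative assumptions.

#!/usr/bin/env python3
"""Rough DGA heuristic over domains pulled from proxy or pDNS logs.
Illustrative only: a real detector would also consider NXDOMAIN rates,
domain age, and known-good allowlists."""

import math
import re
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def dga_score(domain: str) -> float:
    """Crude score: longer labels, higher entropy, and longer consonant runs
    all push the score toward 1.0. Weights are assumptions for illustration."""
    label = domain.split(".")[0].lower()
    if not label:
        return 0.0
    entropy = shannon_entropy(label)
    longest_consonants = max(
        (len(run) for run in re.findall(r"[bcdfghjklmnpqrstvwxz]+", label)),
        default=0,
    )
    return (0.4 * min(len(label) / 20, 1.0)
            + 0.4 * min(entropy / 4.0, 1.0)
            + 0.2 * min(longest_consonants / 5, 1.0))

if __name__ == "__main__":
    # Hypothetical input: one domain per line, exported from proxy or pDNS logs.
    with open("proxy_domains.txt") as f:
        for domain in (line.strip() for line in f if line.strip()):
            score = dga_score(domain)
            if score > 0.7:  # assumed cutoff; tune against your own traffic
                print(f"{score:.2f}\t{domain}")

In practice, a score like this is only a triage aid; it should be combined with query volume, NXDOMAIN ratios, and allowlists before it ever raises an alert.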

Correctly deploying end-to-end encryption (E2EE) is difficult, which can leave unencrypted data at risk of attack. Consider a point-of-sale (POS) terminal. For full E2EE, at a minimum, data (i.e., credit card information) would need to be encrypted on the card itself, at the hardware terminal scanning the card, in the POS application processing the transaction, at the disk level for any locally stored artifacts, at the network transport layer back to a centralized POS server, and at the data storage layer in the centralized POS system. It’s a difficult enough process that U.S. retailer Target suffered a $148 million loss in part as a result of hackers scraping unencrypted credit card transactions from memory on its POS terminals.

For the same reasons that it’s difficult to implement encryption in a corporate setting, it’s difficult for attackers to do so in their software and infrastructure. Advanced adversaries and campaigns often encrypt C2 communications (e.g., screenshots, keylogger data, etc.), but less sophisticated attacks rarely encrypt command and control. This fact can lead to a detectable anomaly in and of itself. The cost of implementing and maintaining encryption outweighs the profits in the malware industry. If phishing and malware campaigns are highly lucrative without running complicated PKI infrastructures, or running and supporting more advanced cryptographic algorithms and key management systems, why would criminals bother? If an attacker requires confidentiality, then they should encrypt. If the goal is to not be detected, encryption is largely unimportant.
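
One way to act on that asymmetry is to measure payload randomness: on channels where plaintext protocols are expected, high-entropy traffic (encrypted or compressed) stands out. The sketch below assumes you already have raw payload bytes from a capture; the 7.2 bits-per-byte cutoff is an assumed threshold for illustration, not a universal constant.

import math
from collections import Counter

def byte_entropy(payload: bytes) -> float:
    """Shannon entropy in bits per byte; near 8.0 for random/encrypted data,
    typically well under 6.0 for plaintext protocols."""
    if not payload:
        return 0.0
    counts = Counter(payload)
    total = len(payload)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_encrypted(payload: bytes, threshold: float = 7.2) -> bool:
    """Assumed cutoff: payloads above ~7.2 bits/byte are likely encrypted or
    compressed, which is anomalous on ports where plaintext is expected."""
    return byte_entropy(payload) >= threshold

if __name__ == "__main__":
    import os
    print(looks_encrypted(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"))  # False
    print(looks_encrypted(os.urandom(1024)))  # True (almost always)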

Some attackers use encryption not to protect their own data, but to block access to yours, as discussed in Chapter 3. The ransomware Trojan CryptoLocker (among others) uses both AES encryption and 2048-bit RSA keys to encrypt the victim files it holds hostage.
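
The reason victims cannot simply recover their files lies in the hybrid construction: a fresh symmetric key encrypts the file contents, and that key is wrapped with an RSA public key whose matching private key never touches the victim’s machine. The following sketch illustrates the general pattern using the pyca/cryptography Python package; it is a conceptual example of hybrid encryption, not CryptoLocker’s actual implementation.

"""Conceptual hybrid-encryption sketch: AES for bulk data, RSA to wrap the AES key.
Without the RSA private key, the wrapped AES key (and thus the data) is unrecoverable."""

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A 2048-bit RSA key pair; in the ransomware scenario, only the public half
# ever reaches the victim's machine.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def hybrid_encrypt(plaintext: bytes):
    """Encrypt data with a random AES-256-GCM key, then wrap that key with RSA-OAEP."""
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    wrapped_key = public_key.encrypt(aes_key, OAEP)
    return wrapped_key, nonce, ciphertext

def hybrid_decrypt(wrapped_key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """Recovery requires the RSA private key, which the victim never has."""
    aes_key = private_key.decrypt(wrapped_key, OAEP)
    return AESGCM(aes_key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    wrapped, nonce, ct = hybrid_encrypt(b"quarterly-report.xlsx contents")
    assert hybrid_decrypt(wrapped, nonce, ct) == b"quarterly-report.xlsx contents"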

Catching the Ghost

If you monitor an organization’s network, you will eventually need to deal with encrypted data on the wire or “at rest” on a host. If you deploy a web proxy, you’ll need to assess the feasibility of performing MITM inspection of secure HTTP connections in your environment. From a monitoring perspective, it’s technically possible to monitor some encrypted communications. From a policy perspective, it’s a different story. Does your Internet access policy allow communication with external email, banking, or social media sites where decryption might inadvertently expose your users’ personal credentials? Depending on your organization’s network usage policies, you may be required to exempt certain sites from decryption. For performance reasons, it can also be in your best interest to inspect only encrypted traffic streams to sites or applications in which you do not have implicit trust.
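
One common compromise is a decryption-exemption list: before intercepting a TLS session, the proxy checks the requested hostname or its content category and passes privacy-sensitive or implicitly trusted destinations through untouched. The sketch below is a product-agnostic illustration of that decision logic; the category names and domain suffixes are assumptions, not any vendor’s configuration syntax.

# Hypothetical decision logic for selective TLS interception on a web proxy.
# Real proxies implement this natively; this only illustrates the policy shape.

NO_DECRYPT_CATEGORIES = {"banking", "healthcare", "personal-webmail"}  # assumed policy
TRUSTED_INTERNAL_SUFFIXES = (".corp.example.com",)                     # assumed domains

def should_decrypt(hostname: str, category: str) -> bool:
    """Return True if the proxy should MITM this TLS session."""
    if category in NO_DECRYPT_CATEGORIES:
        return False  # privacy-sensitive: pass through untouched
    if hostname.endswith(TRUSTED_INTERNAL_SUFFIXES):
        return False  # implicit trust: skip for performance
    return True       # everything else gets decrypted and inspected

# Example:
# should_decrypt("www.examplebank.com", "banking")        -> False
# should_decrypt("cdn.unknown-site.net", "uncategorized") -> True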

Remember that even if data cannot be decrypted on the wire, all hope is not lost. There are numerous metadata components such as IP addresses, hostnames, TCP ports, URLs, and bytes transferred that can yield invaluable clues to an investigation. Intrusion detection may not be able to record the payloads, but if the packet structure from a particular malware campaign is predictable or expressible, the IPS can detect or block the attack. Metadata like NetFlow can provide context to the communications. Agents can be installed to decrypt network communications directly on the host. DNS queries can provide further investigative clues for you to trace to transitive closure. System, network, or application logs can identify different portions of the encrypted communications. CSIRTs have gone for years without full packet inspection and still discover security incidents.
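
As a concrete, hypothetical example of working from metadata alone, the following sketch scans NetFlow-style records for large outbound transfers to destinations that the environment rarely contacts. The field names and thresholds are illustrative assumptions rather than any particular collector’s schema.

from collections import Counter
from dataclasses import dataclass

@dataclass
class FlowRecord:
    # Minimal NetFlow-style fields; real flow exports carry many more.
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes_out: int

def flag_suspicious(flows,
                    large_xfer_bytes: int = 50_000_000,  # assumed ~50 MB cutoff
                    rare_dst_threshold: int = 2):
    """Flag flows that are large outbound transfers to rarely contacted hosts."""
    dst_popularity = Counter(f.dst_ip for f in flows)
    return [
        f for f in flows
        if f.bytes_out >= large_xfer_bytes
        and dst_popularity[f.dst_ip] <= rare_dst_threshold
    ]

if __name__ == "__main__":
    flows = [
        FlowRecord("10.1.1.5", "203.0.113.9", 443, 120_000_000),  # big upload, rare dst
        FlowRecord("10.1.1.6", "198.51.100.2", 443, 4_000),
        FlowRecord("10.1.1.7", "198.51.100.2", 443, 9_500),
    ]
    for f in flag_suspicious(flows):
        print(f"Review: {f.src_ip} -> {f.dst_ip}:{f.dst_port} ({f.bytes_out} bytes)")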

As described in Chapter 4, metadata and log mining can still uncover the details needed to resolve problems, even if you can’t read the full contents of a packet. It’s always nice to have a full breakdown of all the communications, but even without encryption in the picture, that data is often unavailable simply due to storage restrictions. Triggers can still fire on suspicious sequences of metadata, hosts, and applications, and people can be profiled by their data patterns and their outliers.

Technology evolves at a tremendous pace, yet we’re still able to keep up and respond. Undeniably, technology will continue to evolve, but there is no reason to believe it will change or will be capable of changing in such a way that monitoring and responding to security incidents is impossible. Remember that even as technology and IT trends change, the playbook approach holds fast as a reliable framework to plug in new variables and inputs with consistent results.

TL;DR

There are millions more computers on the way, along with millions of new applications, services, and technological phenomena. Along with the future generations of computers and technology, there will be more connectivity, more data, more capability, and more encryption and obfuscation. However, as long as there are computer networks, there should always be ways to monitor them, regardless of how big they become. As long as there are applications, there will be the possibility for helpful log data. In the end, security investigations boil down to asking the right people the right questions about the right data, and then analyzing any logs you can get for evidence. Even if the content of log data isn’t readable, the data about the data will certainly exist.

Staying relevant in InfoSec not only means knowing how to defend what you have, but also knowing how to predict what’s coming. Technology clearly moves in trends based largely on computing capabilities, and given the infinite ingenuity of human innovation, things can change quickly. Talk to most people who have been in IT for years, and they can all tell you about the things they used to be experts in that are now irrelevant or completely obsolete. You will also hear of the various trends over the years and their impact (whether positive or negative) on the current state of operations. The technology industry, and the InfoSec industry in particular, is accelerating rapidly. As crime adapts to the digital landscape, the pace of development forces criminals and network defenders to constantly try to gain an edge over one another. Given the complexity of today’s computing environments and networks, the possibilities for attackers are seemingly limitless, while defenders must stay on top of the latest techniques and methods to remain relevant.

In Chapter 2, we introduced four core questions to guide our playbook methodology:

§ What are we trying to protect?

§ What are the threats?

§ How do we detect them?

§ How do we respond?

Though the environments in which those questions are asked and answered may change over time, the underlying methodology ensures a repeatable process that can adapt with changing technologies, vendors, and products. At a micro level, the playbook allows for constructive adjustment, revision, replacement, or even retirement of any given monitoring objective. At a macro level, the playbook allows new plays and methods to be introduced, no matter what tools you are using, what threats you are facing, what network you are monitoring, or what trends you are following. As we mentioned in Chapter 6, even if you were somehow able to defend all your systems and detect all of the threats you face today, the pace of technology ensures that you will face something new tomorrow. A successful CSIRT will have, at its core, a solid foundational playbook and the ability to apply that living model to a rapidly changing security landscape.