Threat Modeling: Designing for Security (2014)
Part II. Finding Threats
Chapter 5. Attack Libraries
Some practitioners have suggested that STRIDE is too high level and should be replaced with a more detailed list of what can go wrong. Insofar as STRIDE is abstract, they're right: it could well be useful to have a more detailed list of common problems.
A library of attacks can be a useful tool for finding threats against the system you're building. There are a number of ways to construct such a library. You could collect sets of attack tools; either proof-of-concept code or fully developed (“weaponized”) exploit code can help you understand the attacks. Such a collection, where no modeling or abstraction has taken place, means that each time the library is used, each participant needs to spend time and energy creating a model from the attacks. Therefore, a library that provides that abstraction (at a more detailed level than STRIDE) could well be useful. In this chapter, you'll learn about several such libraries, how they compare to checklists and literature reviews, and a bit about the costs and benefits of creating a new one.
Properties of Attack Libraries
As stated earlier, there are a number of ways to construct an attack library, so you probably won't be surprised to learn that selecting one involves trade-offs, and that different libraries address different goals. The major decisions to be made, either implicitly or explicitly, are as follows:
§ Audience
§ Detail versus abstraction
§ Scope
Audience refers to whom the library targets. Decisions about audience dramatically influence the content and even structure of a library. For example, the “Library of Malware Traffic Patterns” is designed for authors of intrusion detection tools and network operators. Such a library doesn't need to spend much, if any, time explaining how malware works.
The question of detail versus abstraction concerns how much detail is included in each entry of the library. In theory, it's simple: you pick the level of detail your library should deliver, and then make sure it lands there. Closely related is structure, both within entries and between them. Some libraries have very little structure; others have a great deal. Structure between entries helps organize new entries, while structure within an entry helps promote consistency between entries. However, all that structure comes at a cost. Elements that are hard to categorize are inevitable, even when the things being categorized have some form of natural order, such as descending from a common biological origin. Just ask that egg-laying mammal, the duck-billed platypus. When there is less natural order (so to speak), categorization is even harder. You can conceptualize this as shown in Figure 5.1.
Figure 5.1 Abstraction versus detail
Scope is also an important characteristic of an attack library. If it isn't shown by a network trace, it probably doesn't fit the malware traffic attack library. If it doesn't impact the web, it doesn't make sense to include it in the OWASP attack libraries.
There's probably more than one sweet spot for libraries. They are a balance of listing detailed threats while still being thought provoking. The thought-provoking nature of a library is important for good threat modeling. A thought-provoking list means that some of the engineers using it will find interesting and different threats. When the list of threats reaches a certain level of granularity, it stops prompting thinking, risks being tedious to apply, and becomes more and more of a checklist.
The library should contain something to help remind people using it that it is not a complete enumeration of what could go wrong. The precise form of that reminder will depend on the form of the library. For example, in Elevation of Privilege, it is the ace card(s), giving an extra point for a threat not in the game.
Closely related to attack libraries are checklists and literature reviews, so before examining the available libraries, the following section looks at checklists and literature reviews.
Libraries and Checklists
Checklists are tremendously useful tools for preventing certain classes of problems. If a short list of problems is routinely missed for some reason, then a checklist can help you ensure they don't recur. Checklists must be concise and actionable.
Many security professionals are skeptical, however, of “checklist security” as a substitute for careful consideration of threats. If you hate the very idea of checklists, you should read The Checklist Manifesto by Atul Gawande. You might be surprised by how enjoyable a read it is. But even if you take a big-tent approach to threat modeling, that doesn't mean checklists can replace the work of trained people using their judgment.
A checklist helps people avoid common problems, but the modeling of threats has already been done when the checklist is created. Therefore, a checklist can help you avoid whatever set of problems the checklist creators included, but it is unlikely to help you think about security. In other words, using a checklist won't help you find any threats not on the list. It is thus narrower than threat modeling.
Because checklists can still be useful as part of a larger threat modeling process, you can find a collection of them at the end of Chapter 1, “Dive In and Threat Model!” and throughout this book as appropriate. The Elevation of Privilege game, by the way, is somewhat similar to a checklist. Two things distinguish it. The first is the use of aces to elicit new threats. The second is that by making threat modeling into a game, players are given social permission to playfully analyze a system, to step beyond the checklist, and to engage with the security questions in play. The game implicitly abandons the “stop and check in” value that a checklist provides.
Libraries and Literature Reviews
A literature review is, roughly, consulting the published literature to learn what has happened in the past. As you saw in Chapter 2, “Strategies for Threat Modeling,” reviewing threats to systems similar to yours is a helpful starting point in threat modeling. If you write up the input and output of such a review, you may have the start of an attack library that you can reuse later. It will be more like an attack library if you abstract the attacks in some way, but you may defer that to the second or third time you review the attack list.
Developing a new library requires a very large time investment, which is probably part of why there are so few of them. However, another reason might be the lack of prescriptive advice about how to do so. If you want to develop a literature review into a library, you need to consider how the various attacks are similar and how they differ. One model you can use for this is a zoo. A zoo is a grouping—whether of animals, malware, attacks, or other things—that taxonomists can use to test their ideas for categorization. To track your zoo of attacks, you can use whatever form suits you. Common choices include a wiki, or a Word or Excel document. The main criteria are ease of use and a space for each entry to contain enough concrete detail to allow an analyst to dig in.
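If you'd rather track the zoo in code than in a wiki or spreadsheet, a minimal sketch might look like the following. The field names here are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

# A minimal sketch of one way to record entries in an attack "zoo".
# Field names are illustrative, not a standard schema.
@dataclass
class AttackEntry:
    name: str
    description: str  # enough concrete detail for an analyst to dig in
    references: list = field(default_factory=list)  # incidents, papers, exploit code
    grouping: str = "uncategorized"  # revised as the taxonomy evolves

# Example: a new entry starts out uncategorized until the grouping work is done.
entry = AttackEntry(
    name="Pass-the-hash",
    description="Reuse of captured password hashes to authenticate to other hosts",
)
```

The `grouping` field defaulting to "uncategorized" reflects the point above: categorization is deferred, iterative work, not something you need to get right on first capture.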
As you add items to such a zoo, consider which are similar, and how to group them. Be aware that all such categorizations have tricky cases, which sometimes require reorganization to reflect new ways of thinking about them. If your categorization technique is intended to be used by multiple independent people, and you want what's called “inter-rater consistency,” then you need to work on a technique to achieve that. One such technique is to create a flowchart, with specific questions from stage to stage. Such a flowchart can help produce consistency.
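A flowchart of that kind can be sketched directly as code: a fixed sequence of yes/no questions, asked in the same order by every rater, which is what drives inter-rater consistency. The questions and group names below are hypothetical:

```python
# Sketch of a flowchart-style categorizer. Because the questions are asked
# in a fixed order, two independent raters answering them the same way will
# always assign an attack to the same group.
def categorize(attack: dict) -> str:
    if attack.get("requires_physical_access"):
        return "physical"
    if attack.get("targets_a_person"):
        return "social engineering"
    if attack.get("sends_malformed_input"):
        return "input validation"
    return "other"  # a reminder that no flowchart is exhaustive
```

Note that ordering matters: an attack that both targets a person and requires physical access always lands in "physical," because that question is asked first. Making such tie-breaking explicit is exactly what the flowchart buys you.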
The work of grouping and regrouping can be a considerable and ongoing investment. If you're going to create a new library, consider spending some time first researching the history and philosophy of taxonomies. Books like Sorting Things Out: Classification and Its Consequences (Bowker and Star, 1999) can help.
CAPEC
CAPEC is MITRE's Common Attack Pattern Enumeration and Classification. As of this writing, it is a highly structured set of 476 attack patterns, organized into 15 groups:
§ Data Leakage
§ Resource Depletion
§ Injection (Injecting Control Plane content through the Data Plane)
§ Spoofing
§ Time and State Attacks
§ Abuse of Functionality
§ Probabilistic Techniques
§ Exploitation of Authentication
§ Exploitation of Privilege/Trust
§ Data Structure Attacks
§ Resource Manipulation
§ Network Reconnaissance
§ Social Engineering Attacks
§ Physical Security Attacks
§ Supply Chain Attacks
Each of these groups contains a sub-enumeration, which is available via MITRE (2013b). Each pattern includes a description of its completeness, with values ranging from “hook” to “complete.” A complete entry includes the following:
§ Typical severity
§ A description, including:
§ Attack execution flow
§ Method(s) of attack
§ Attacker skills or knowledge required
§ Resources required
§ Probing techniques
§ Indicators/warnings of attack
§ Solutions and mitigations
§ Attack motivation/consequences
§ Relevant security requirements, principles, and guidance
§ Technical context
§ A variety of bookkeeping fields (identifier, related attack patterns and vulnerabilities, change history, etc.)
An example CAPEC is shown in Figure 5.2.
Figure 5.2 A sample CAPEC entry
You can use this very structured set of information for threat modeling in a few ways. For instance, you could review a system being built against either each CAPEC entry or the 15 CAPEC categories. Reviewing against the individual entries is a large task, however; if a reviewer averages five minutes for each of the 476 entries, that's a full 40 hours of work. Another way to use this information is to train people about the breadth of threats. Using this approach, it would be possible to create a training class, probably taking a day or more.
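The back-of-the-envelope arithmetic works out as follows, using the pattern count cited earlier in this chapter and an assumed five-minute average per entry:

```python
# Rough effort estimate for an entry-by-entry CAPEC review.
patterns = 476           # pattern count cited in this chapter
minutes_per_pattern = 5  # assumed average review time per entry
hours = patterns * minutes_per_pattern / 60
print(round(hours, 1))   # just under a 40-hour work week
```

If your reviewers are slower, or the system touches most categories, the estimate scales linearly, which is why the category-level review is the more common starting point.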
The appropriate exit criteria for using CAPEC depend on the mode in which you're using it. If you are performing a category review, then you should have at least one issue per categories 1–11 (Data Leakage, Resource Depletion, Injection, Spoofing, Time and State, Abuse of Functionality, Probabilistic Techniques, Exploitation of Authentication, Exploitation of Privilege/Trust, Data Structure Attacks, and Resource Manipulation) and possibly one for categories 12–15 (Network Reconnaissance, Social Engineering, Physical Security, Supply Chain).
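That exit check is mechanical enough to sketch in code. The helper below is hypothetical (not part of CAPEC); the category names follow the list above:

```python
# Hypothetical helper: check the exit criteria for a CAPEC category review.
# Categories 1-11 each require at least one recorded issue.
REQUIRED = [
    "Data Leakage", "Resource Depletion", "Injection", "Spoofing",
    "Time and State", "Abuse of Functionality", "Probabilistic Techniques",
    "Exploitation of Authentication", "Exploitation of Privilege/Trust",
    "Data Structure Attacks", "Resource Manipulation",
]

def missing_categories(issues_by_category: dict) -> list:
    """Return the required categories for which no issues were recorded."""
    return [c for c in REQUIRED if not issues_by_category.get(c)]
```

A review that comes back with gaps isn't necessarily wrong, but each empty required category deserves an explicit justification before you declare the review done.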
Perspective on CAPEC
Each CAPEC entry includes an assessment of its completeness, which is a nice touch. CAPEC entries include a variety of sections, and CAPEC's scope differs from STRIDE's in ways that can be challenging to unravel. (This is neither a criticism of CAPEC, which existed before this book, nor a suggestion that CAPEC change.)
The impressive size and scope of CAPEC may make it intimidating for people to jump in. At the same time, that specificity may make it easier to use for someone who's just getting started in security, where specificity helps to identify attacks. For those who are more experienced, the specificity and apparent completeness of CAPEC may result in less creative thinking. I personally find that CAPEC's impressive size and scope make it hard for me to wrap my head around it.
CAPEC is a classification of common attacks, whereas STRIDE is a set of security properties. This leads to an interesting contrast. CAPEC, as a set of attacks, is a richer elicitation technique. However, the techniques for addressing the CAPEC attacks are far more complex, whereas the STRIDE defenses are simply those approaches that preserve the property in question. Since finding attacks is harder than looking up defenses, CAPEC may have more promise than STRIDE for many populations of threat modelers. It would be fascinating to see efforts made to improve CAPEC's usability, perhaps with cheat sheets, mnemonics, or software tools.
OWASP Top Ten
OWASP, the Open Web Application Security Project, offers a regularly updated Top Ten Risks list. In 2013, the list was as follows:
§ Injection
§ Broken Authentication and Session Management
§ Cross-Site Scripting
§ Insecure Direct Object References
§ Security Misconfiguration
§ Sensitive Data Exposure
§ Missing Function Level Access Control
§ Cross Site Request Forgery
§ Using Components with Known Vulnerabilities
§ Unvalidated Redirects and Forwards
This is an interesting list from the perspective of the threat modeler. The list is a good length, and many of these entries seem well balanced between attack detail and the power to provoke thought. A few (cross-site scripting and cross-site request forgery) seem overly specific with respect to threat modeling; they may be better as input into test planning.
Each entry has backing information, including threat agents, attack vectors, security weaknesses, and technical and business impacts, as well as details covering whether you are vulnerable to the attack and how to prevent it.
To the extent that what you're building is a web project, the OWASP Top Ten list is probably a good adjunct to STRIDE. OWASP updates the Top Ten list regularly, based on the input of its volunteer membership. Over time, the list may become more or less valuable as a threat modeling attack library.
The OWASP Top Ten are incorporated into a number of OWASP-suggested methodologies for web security. Turning the Top Ten into a threat modeling methodology would likely involve creating something like a STRIDE-per-element approach (Top Ten per Element?) or looking for risks in the list at each point where a data flow has crossed a trust boundary.
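The "Top Ten per Element" idea can be sketched as a simple pairing: wherever a data flow crosses a trust boundary, consider each 2013 Top Ten risk against it. The function and flow representation below are hypothetical illustrations, not an OWASP methodology:

```python
# Sketch of a hypothetical "Top Ten per Element" pass: consider each
# OWASP Top Ten (2013) risk at every data flow that crosses a trust boundary.
TOP_TEN_2013 = [
    "Injection",
    "Broken Authentication and Session Management",
    "Cross-Site Scripting",
    "Insecure Direct Object References",
    "Security Misconfiguration",
    "Sensitive Data Exposure",
    "Missing Function Level Access Control",
    "Cross-Site Request Forgery",
    "Using Components with Known Vulnerabilities",
    "Unvalidated Redirects and Forwards",
]

def top_ten_per_element(data_flows: list) -> list:
    """Pair each boundary-crossing flow with every Top Ten risk to consider."""
    return [
        (flow["name"], risk)
        for flow in data_flows
        if flow.get("crosses_trust_boundary")
        for risk in TOP_TEN_2013
    ]

# Example: only the boundary-crossing flow generates candidate threats.
flows = [
    {"name": "browser to web server", "crosses_trust_boundary": True},
    {"name": "internal cache read", "crosses_trust_boundary": False},
]
candidates = top_ten_per_element(flows)
```

As with STRIDE-per-element, most of the generated pairs will be quickly dismissed; the value is in forcing a moment's consideration of each risk at each boundary crossing.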
Summary
By providing more specifics, attack libraries may be useful to those who are not deeply familiar with the ways attackers work. It is challenging to find a generally useful sweet spot between providing lots of detail and becoming tedious, and to balance detail against the risk of fooling a reader into thinking a checklist is comprehensive. Performing a literature review and capturing the details in an attack library is a good way for someone to increase their knowledge of security.
There are a number of attack libraries available, including CAPEC and the OWASP Top Ten. Other libraries may also provide value depending on the technology or system on which you're working.