Security for Web Developers (2015)
I Developing a Security Plan
1 Defining the Application Environment
Data is the most important resource that any business owns. It’s literally possible to replace any part of a business except the data. When the data is modified, corrupted, stolen, or deleted, a business can suffer serious loss. In fact, a business that suffers enough damage to its data can simply cease to exist. The focus of security, therefore, is not hackers, applications, networks, or anything else someone might have told you—it’s data. This book, then, is about data security. That subject encompasses a broad range of other topics, but as you read about them, keep in mind what you’re really trying to protect.
Unfortunately, data isn’t much use sitting alone in the dark. No matter how fancy your server is, no matter how capable the database that holds the data, the data isn’t worth much until you do something with it. The need to manage data brings applications into the picture and the use of applications to manage data is why this introductory chapter talks about the application environment.
However, before you go any further, it’s important to decide precisely how applications and data interact, because the rest of the chapter isn’t very helpful without this insight. An application performs just four operations on data, no matter how incredibly complex the application might become. You can define these operations with the CRUD acronym (a short sketch after the list shows how these operations typically surface in a web application):
• Create
• Read
• Update
• Delete
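To make the acronym concrete, here is a minimal sketch of how the four CRUD operations commonly surface in a web application, both as HTTP verbs and as SQL statements. The table name and statements are illustrative only, not tied to any particular application in this book.

```javascript
// Illustrative mapping of CRUD onto the HTTP verbs and SQL statements a
// typical web application uses. Every one of these operations is a point
// where security checks belong: who may perform it, on which records, and
// with what data.
const crudMap = {
  create: { http: 'POST',   sql: 'INSERT INTO customers ...' },
  read:   { http: 'GET',    sql: 'SELECT ... FROM customers' },
  update: { http: 'PUT',    sql: 'UPDATE customers SET ...'  },
  delete: { http: 'DELETE', sql: 'DELETE FROM customers ...' }
};

console.log(crudMap.read.http); // "GET"
```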
The sections that follow discuss data, applications, and CRUD as they relate to the web environment. You discover how security affects all three aspects of web development, keeping in mind that even though data is the focus, the application performs the required CRUD tasks. Keeping your data safe means understanding the application environment and therefore the threats to the data the application manages.
Specifying Web Application Threats
You can find lists of web application threats all over the Internet. Some of the lists are quite complete and don’t necessarily have a bias, some address what the author feels are the most important threats, some lists tell you about the most commonly occurring threats, and you can find all sorts of other lists out there. The problem with all these lists is that the author doesn’t know your application. A SQL injection attack is only useful if your application uses SQL in some way—perhaps it doesn’t.
Obviously, you need to get ideas on what to check from somewhere, and these lists do make a good starting place. However, you need to consider the list content in light of your application. In addition, don’t rely on just one list—use multiple lists so that you obtain better coverage of the threats that could actually affect your application. With this need in mind, here is a list of the most common threats you see with web applications today:
• Buffer Overflow: An attacker manages to send enough data in an input buffer to overflow an application or output buffer. As a result, memory outside the buffer becomes corrupted. Some forms of buffer overflow allow the attacker to perform seemingly impossible tasks because the affected memory contains executable code. The best way to overcome this problem is to perform range and size checks on any data, input or output, that your application handles.
• Code Injection: An entity adds code to the data stream flowing between a server and a client (such as a browser). The target often views the added code as part of the original page, but it could contain anything. Of course, the target may not even see the injected code. It might be lurking in the background ready to cause all sorts of problems for your application. A good way to overcome this attack is to ensure you use encrypted data streams, the HTTPS protocol, and code verification (when possible). Providing a client feedback mechanism is also a good idea.
Code injection occurs more often than you might think. In some cases, the code injection isn’t even part of an attack, but it might as well be. A recent article (see http://www.infoworld.com/article/2925839/netneutrality/code-injection-new-low-isps.html) discusses how Internet Service Providers (ISPs) are injecting JavaScript code into the data stream in order to overlay ads on top of a page. In order to determine what sort of ad to provide, the ISP also monitors the traffic.
• Cross-site Scripting (XSS): An attacker injects JavaScript or other executable code into the output stream of your application. The recipient sees your application as the source of the infection, even when it isn’t. In most cases, you don’t want to allow users to send data directly to each other through your application without strict verification. A moderated format for applications such as blogs is a must to ensure your application doesn’t end up serving viruses or worse along with seemingly benign data.
Few experts remind you to check your output data. However, you don’t actually know that your own application is trustworthy. A hacker could modify it to allow tainted output data. Verification checks should include output data as well as input data.
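As a hedged illustration of what an output check can look like, the following sketch escapes data just before it reaches the page. The helper function and sample comment are illustrative; in practice you would usually lean on a vetted templating engine that escapes output by default.

```javascript
// Minimal HTML escaping applied at output time. Anything the application
// did not generate itself (user comments, database fields, API responses)
// gets escaped before it is written into the page.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

const comment = '<script>stealCookies()</script>';
console.log(escapeHtml(comment));
// -> &lt;script&gt;stealCookies()&lt;/script&gt;
```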
• File Uploads: Every file upload, even those that might seem otherwise innocuous, is suspect. If possible, disallow file uploads to your server. Of course, it isn’t always possible to provide this level of security, so you need to allow just certain types of file and then scan the file for problems. Authenticating the file as much as is possible is always a good idea. For example, some files contain a signature at the beginning that you can use to ensure the file is legitimate. Don’t rely on file extension exclusion alone—hackers often make one file look like another type in order to bypass server security.
• Hard Coded Authentication: Developers often place authentication information in application initialization files for testing purposes. It’s essential to remove these hard coded authentication entries and rely on a centralized data store for security information instead. Keeping the data store in a secure location, off the server used for web applications, is essential to ensuring that hackers can’t simply read the credentials used to access the application. If you do need initialization files for the application, make sure these files reside outside the webroot directory to ensure that hackers can’t discover them accidentally.
• Hidden or Restricted File/Directory Discovery: When your application allows input of special characters such as the forward slash (/) or backslash (\), it’s possible for a hacker to discover hidden or restricted files and directories. These locations can contain all sorts of information that a hacker can find useful in attacking your system. Disallowing use of special characters whenever possible is a great idea. In addition, store critical files outside the webroot directory in locations that the operating system can control directly.
• Missing or Incorrect Authentication: It’s important to know whom you’re dealing with, especially when working with sensitive data. Many web applications rely on common accounts for some tasks, which means it’s impossible to know who has accessed the account. Avoid using guest accounts for any purpose and assign each user a specific account to use.
• Missing or Incorrect Authorization: Even if you know the person you’re dealing with, it’s important to provide only the level of authorization needed to perform a given task. In addition, the authorization should reflect the user’s method of access. A desktop system accessing the application from the local network is likely more secure than a smartphone accessing the application from the local coffee shop. Relying on security promotion to assist in sensitive tasks lets you maintain minimal rights the rest of the time. Anything you can do to reduce what the user is authorized to do helps maintain a secure environment.
• Missing or Incorrect Encryption: Use encryption to transmit data of any sort between two endpoints to help keep hackers from listening in on your communication. It’s important to keep track of the latest encryption techniques and rely on the best encryption supported by the user’s environment. For example, Triple Data Encryption Standard (3DES) isn’t secure any longer, yet some organizations continue to use it. The current Advanced Encryption Standard (AES) remains mostly secure, but you want to use the largest key possible to help make it harder to crack.
• Operating System Command Injection: An attacker modifies an operating system command your application uses to perform specific tasks. Your web-based application probably shouldn’t use operating system calls in the first place. However, if you absolutely must make operating system calls, make sure the application runs in a sandbox.
Some experts will emphasize validating input data for some uses and leave the requirement off for other uses. Always validate any data you receive from anywhere. You have no way of knowing what vehicle a hacker will use to obtain access to your system or cause damage in other ways. Input data is always suspect, even when the data comes from your own server. Being paranoid is a good thing when you’re performing security-related tasks.
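The following sketch shows one way such paranoia can look on the server side: an allow-list style check applied to every field before the application uses it. The field names and rules are examples only, not a complete validation framework.

```javascript
// Server-side validation sketch: treat every field as hostile until it
// passes an explicit check. Field names and rules are illustrative.
function validateOrder(input) {
  const errors = [];

  // Type and range check on a numeric field.
  const quantity = Number(input.quantity);
  if (!Number.isInteger(quantity) || quantity < 1 || quantity > 100) {
    errors.push('quantity must be an integer between 1 and 100');
  }

  // Length and character allow-list on a string field.
  if (typeof input.partName !== 'string' ||
      !/^[A-Za-z0-9 \-]{1,40}$/.test(input.partName)) {
    errors.push('partName contains unexpected characters or is too long');
  }

  return errors;
}

console.log(validateOrder({ quantity: '7', partName: 'Widget-9' }));        // []
console.log(validateOrder({ quantity: '7; DROP TABLE', partName: '<b>' })); // 2 errors
```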
• Parameter Manipulation: Hackers can experiment with parameters passed as part of the request header or URL. For example, when working with Google, changing the URL parameters changes the results of your search. Make sure you encrypt any parameters you pass between the browser and the server. In addition, use secure web page protocols, such as HTTPS, when passing parameters.
• Remote Code Inclusion: Most web applications today rely on included libraries, frameworks, and APIs. In many cases, the include statement contains a relative path or uses a variable containing a hard coded path to make it easier to change the location of the remote code later. When a hacker is able to gain access to the path information and change it, it’s possible to point the remote code inclusion to any code the hacker wants, giving the hacker full access to the application. The best way to avoid this particular problem is to use hard coded full paths whenever possible, even though this action makes it harder to maintain the code.
Many experts will recommend that you use vetted libraries and frameworks to perform dangerous tasks. However, these add-ons are simply more code. Hackers find methods for corrupting and circumventing library and framework code on a regular basis. You still have a need to ensure your application and any code it relies upon interacts with outside elements safely, which means performing extensive testing. Using libraries and frameworks does reduce your support costs and ensures that you get timely fixes for bugs, but the bugs still exist and you still need to be on guard. There is no security silver bullet. Chapter 6 contains more information about working with libraries and frameworks.
• Session Hijacking: Every time someone logs into your web server, the server gives that user a unique session. A session hijacker jumps into the session and intercepts data transferred between the user and the server. The three common places to look for information used to hijack a session are: cookies, URL rewriting, and hidden fields. Hackers look for session information in these places. By keeping the session information encrypted, you can reduce the risk of someone intercepting it. For example, make sure you rely on the HTTPS protocol for logins. You also want to avoid doing things like making your session IDs predictable.
• SQL Injection: An attacker modifies a query that your application creates as the result of user or other input. In many cases, the application requests query input data, but it receives SQL elements instead. Other forms of SQL injection attack involve the use of escape or other unexpected characters or character sequences. A good way to avoid SQL injection attacks is to avoid dynamically generated queries.
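One common way to avoid dynamically generated queries is a parameterized (prepared) query, sketched below for Node.js with the pg PostgreSQL client. The table, column, and function names are illustrative; the point is that user input travels to the database as a value, never as SQL text.

```javascript
// Parameterized query sketch: the driver sends the value separately from
// the SQL text, so SQL fragments in the input remain harmless data.
const { Client } = require('pg');

async function findCustomer(client, email) {
  // NEVER: "SELECT * FROM customers WHERE email = '" + email + "'"
  const result = await client.query(
    'SELECT id, name FROM customers WHERE email = $1',
    [email]
  );
  return result.rows;
}

// Usage (connection details omitted):
// const client = new Client(); await client.connect();
// await findCustomer(client, "bob@example.com' OR '1'='1"); // matches nothing
```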
This may look like a lot of different threats, but if you search long enough online, you could easily triple the size of this list and not even begin to scratch the surface of the ways in which a hacker can make your life interesting. As this book progresses, you’ll encounter a much larger number of threat types and start to discover ways to overcome them. Don’t worry, in most cases the fixes end up being common sense and a single fix can resolve more than one problem. For example, look through the list again and you’ll find that simply using HTTPS solves a number of these problems.
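As one hedged example of that kind of broad fix, the sketch below shows session cookie settings for an Express application that force HTTPS-only, JavaScript-inaccessible cookies. The middleware packages (express, express-session) are real, but the configuration values are placeholders and assume HTTPS is already terminated by the server or a proxy in front of it.

```javascript
// Session cookie hardening sketch for an Express application.
const express = require('express');
const session = require('express-session');

const app = express();
app.use(session({
  secret: process.env.SESSION_SECRET, // never hard code this value
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: true,      // the cookie only travels over HTTPS
    httpOnly: true,    // not readable from client-side JavaScript
    sameSite: 'strict' // not sent along with cross-site requests
  }
}));
```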
Considering the Privacy Aspect of Security
When delving into security, an organization tends to focus first on its own data security. After all, if the organization’s data becomes lost, corrupted, modified, or otherwise unusable, the organization could go out of business. The next level of scrutiny usually resides with third parties, such as partners. Often, the security of user data comes last and many organizations don’t think too much about customer data security at all. The problem is that many users and customers see the safety of their data as paramount. The whole issue of privacy comes down to the protection of user data such that no one misuses or exposes the information without the user’s knowledge and consent. In short, when building an application, you must also consider the privacy of user data as a security issue and an important one at that.
A recent article points out that users and customers view the tech industry as poor trustees of their data (http://www.infoworld.com/article/2925292/internet-privacy/feds-vs-silicon-valley-who-do-you-trust-less.html). In fact, the tech industry has actually fallen behind the government—people trust the government to safeguard their information more often. Many tech companies publicly support enhanced security policies for other entities (such as the government) and privately build more ways to thwart any notion of privacy that the user or customer might have. This duality makes the situation even worse than it might otherwise be if the tech industry were open about the encroachment on user and customer data.
In order to create a truly secure application, you must be willing to secure every aspect of it, including user and customer data. This act requires that the application only obtain and manage the data necessary to perform its task and that it discard that data when no longer needed. Trust is something that your application can gain only when it adheres to the same set of rules for working with all data, no matter its source.
Understanding Software Security Assurance (SSA)
The purpose of software is to interact with data. However, software itself is a kind of data. In fact, data comes in many forms that you might not otherwise consider and the effect of data is wider ranging than you might normally think. With the Internet of Things (IoT), it’s now possible for data to have both abstract and physical effects in ways that no one could imagine even a few years ago. A hacker gaining access to the right application can do things like damage the electrical grid or poison the water system. On a more personal level, the same hacker could potentially raise the temperature of your home to some terrifying level, turn off all the lights, spy on you through your webcam, or do any of a number of other things. The point of SSA is that software needs some type of regulation to ensure it doesn’t cause the loss, inaccuracy, alteration, unavailability, or misuse of the data and resources that it uses, controls, and protects. The following sections discuss SSA in more detail.
SSA isn’t an actual standard at this time. It’s a concept that many organizations quantify and put into writing based on that organization’s needs. The same basic patterns appear in many of these documents, and the term SSA refers to the practice of ensuring software remains secure. You can see how SSA affects many organizations, such as Oracle (http://www.oracle.com/us/support/assurance/overview/index.html) and Microsoft (https://msdn.microsoft.com/library/windows/desktop/84aed186-1d75-4366-8e61-8d258746bopq.aspx), by reviewing each organization’s SSA documentation online. In fact, many large organizations now have some form of SSA in place.
Considering the OSSAP
One of the main sites you need to know about in order to make SSA a reality in web applications is the Open Web Application Security Project (OWASP) (https://www.owasp.org/index.php/OWASP_Software_Security_Assurance_Process) (see Figure 1-1). The site breaks down the process required to make the OWASP Software Security Assurance Process (OSSAP) part of the Software Development Lifecycle (SDLC). Yes, that’s a whole bunch of alphabet soup, but you need to know about this group in order to create a process for your application that matches the work done by other organizations. In addition, the information on this site helps you develop a security process for your application that actually works, is part of the development process, and won’t cost you a lot of time in creating your own process.
Figure 1-1. The OWASP site tells you about SSA for web applications.
Even though OSSAP does provide a great framework for ensuring your application meets SSA requirements, there is no requirement that you interact with this group in any way. The group does license its approach to SSA. However, at this time, the group is just getting underway and you’ll find a lot of TBDs on the site that the group plans to fill in as time passes. Of course, you need a plan for today, so OWASP and its OSSAP present a place for you to research solutions for now and possibly get additional help later.
The whole reason to apply SSA to your application as part of the SDLC is to ensure that the software is as reliable and error free as you can make it. Some people imply that SSA will fix every potential security problem you might encounter, but this simply isn’t the case. SSA will improve your software, but you can’t find any piece of software anywhere that is error free. Assuming that you did manage to create a piece of error free software, you still have user, environment, network, and all sorts of other security issues to consider. Consequently, SSA is simply one piece of a much larger security picture and implementing SSA will only fix so many security issues. The best thing to do is to continue seeing security as an ongoing process.
Defining SSA Requirements
The initial step in implementing SSA as part of your application is to define the SSA requirements. These requirements help you determine the current state of your software, the issues that require resolution, and the severity of those issues. After the issues are defined, you can determine the remediation process and any other requirements needed to ensure that the software remains secure. In fact, you can break SSA down into eight steps:
1. Evaluate the software and develop a plan to remediate it.
2. Define the risks that the security issues represent to the data and categorize these risks to remediate the worst risks first.
3. Perform a complete code review.
4. Implement the required changes.
5. Test the fixes you create and verify that they actually do work on the production system.
6. Define a defense for protecting application access and therefore the data that the application manages.
7. Measure the effectiveness of the changes you have made.
8. Educate management, users, and developers in the proper methods to ensure good application security.
Categorizing Data and Resources
This process involves identifying the various pieces of data that your application touches in some way, including its own code and configuration information. Once you identify every piece of data, you categorize it to identify the level of security required to protect that data. Data can have many levels of categorization and the way in which you categorize the data depends on your organization’s needs and the orientation of the data. For example, some data may simply inconvenience the organization, while other data could potentially cause harm to humans. Defining how a breach of each data category affects the security environment as a whole is essential.
After the data categorization process is complete, it’s possible to begin using the information to perform a variety of tasks. For example, you can consider how to reduce vulnerabilities by:
• Creating coding standards
• Implementing mandatory developer training
• Hiring security leaders within development groups
• Using automated testing procedures that specifically locate security issues
All of these methods point to resources that the organization interacts with and relies upon to ensure the application manages data correctly. Categorizing resources means determining how much emphasis to place on a particular resource. For example, denying developers training will have a bigger impact than denying individual application users training because the developers work with the application as a whole. Of course, training is essential for everyone. In this case, categorizing resources of all sorts helps you determine where and how to spend money in order to obtain the best Return on Investment (ROI), while still meeting application security goals.
Performing the Required Analysis
As part of SSA, you need to perform an analysis on your application. It’s important to know precisely what sorts of weaknesses your code could contain. The operative word here is “could.” Until you perform an in-depth analysis, you have no way of knowing the actual security problems in your code. Web applications are especially adept at hiding issues because, unlike desktop applications, the code can appear in numerous places. In addition, scripts tend to hide problems that compiled applications don’t have because the code is interpreted at runtime rather than checked at compile time.
It’s important to understand that security isn’t just about the code—it’s also about the tools required to create the code and the skill of the developers employing those tools. When an organization chooses the wrong tools for the job, the risk of a security breach becomes much higher because the tools may not create code that performs precisely as expected. Likewise, when developers using the tool don’t have the required skills, it’s hardly surprising that the software has security holes that a more skilled developer would avoid.
Some experts claim that there are companies that actually allow substandard work. In most cases, the excuse for allowing such work is that the application development process is behind schedule or that the organization lacks required tools or expertise. The fact that an organization may employ software designed to help address security issues (such as a firewall) doesn’t relieve the developer of the responsibility to create secure code. Organizations need to maintain coding standards to ensure a good result.
Logic
Interacting with an application and the data it manages is a process. Even though users might perform tasks in a seemingly random fashion, specific tasks follow patterns that occur because the user must follow a procedure in order to obtain a good result. By documenting and understanding these procedures, you can analyze application logic from a practical perspective. Users rely on a particular procedure because of the way in which developers design the application. Changing the design will necessarily change the procedure.
The point of the analysis is to look for security holes in the procedure. For example, the application may allow the user to remain logged in, even if it doesn’t detect activity for an extended period. The problem is that the user might not even be present—someone else could access the application using the user’s credentials and no one would be the wiser because everyone would think that the user is logged in using the same system as always.
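A sketch of one possible countermeasure follows: middleware that ends a session after a period of inactivity, assuming an Express application with express-session configured (as in the earlier cookie sketch). The 15 minute window and status code are arbitrary examples.

```javascript
// Idle-timeout sketch: an unattended login can't be reused indefinitely.
const IDLE_LIMIT_MS = 15 * 60 * 1000; // 15 minutes, an arbitrary example

function idleTimeout(req, res, next) {
  const now = Date.now();
  if (req.session.lastActivity && now - req.session.lastActivity > IDLE_LIMIT_MS) {
    // Destroy the stale session and force the user to log in again.
    return req.session.destroy(() => res.status(401).send('Session expired'));
  }
  req.session.lastActivity = now;
  next();
}

// app.use(session({ ... })); // configured as in the earlier cookie sketch
// app.use(idleTimeout);
```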
However, holes in a procedure can take other forms. A part number might consist of various quantifiable elements. In order to obtain a good part number, the application could ask for the elements, rather than the part number as a whole, and build the part number from those elements. The idea is to make the procedure cleaner, clearer, and less error prone so that the database doesn’t end up containing a lot of bad information.
Data
It may not seem like you can perform much analysis on data from a security perspective, but there really are a lot of issues to consider. In fact, data analysis is one of the areas where organizations fall down most because the emphasis is on how to manage and use the data, rather than on how to secure the data (it’s reasonable to assume you need to address all three issues). When analyzing the data, you must consider these issues:
• Who can access the data
• What format is used to store the data
• When the data is accessible
• Where the data is stored
• Why each data item is made available as part of the application
• How the data is broken into components and the result of combining the data for application use
For example, some applications fail to practice data hiding, which is an essential feature of any good application. Data hiding means giving the user only the amount of information actually needed to perform any given task.
Applications also format some data incorrectly. For example, storing passwords as text will almost certainly cause problems should someone break in. A better route is to store a password hash. The hash isn’t of much value to someone who has broken in because it can’t easily be reversed to reveal the original password; the application simply hashes the password the user supplies at login and compares the result to the stored value.
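Here is a minimal sketch of that approach using Node’s built-in crypto module and PBKDF2 with a per-user salt. Real systems frequently rely on a vetted password-hashing library instead (bcrypt or an Argon2 wrapper, for example), and the iteration count shown is only a placeholder.

```javascript
// Store a salted password hash, never the password itself.
const crypto = require('crypto');

const ITERATIONS = 100000; // placeholder; tune for your hardware

function hashPassword(password) {
  const salt = crypto.randomBytes(16);
  const hash = crypto.pbkdf2Sync(password, salt, ITERATIONS, 64, 'sha512');
  return salt.toString('hex') + ':' + hash.toString('hex');
}

function verifyPassword(password, stored) {
  const [saltHex, hashHex] = stored.split(':');
  const candidate = crypto.pbkdf2Sync(password, Buffer.from(saltHex, 'hex'),
                                      ITERATIONS, 64, 'sha512');
  return crypto.timingSafeEqual(candidate, Buffer.from(hashHex, 'hex'));
}

const stored = hashPassword('correct horse battery staple');
console.log(verifyPassword('correct horse battery staple', stored)); // true
console.log(verifyPassword('guess', stored));                        // false
```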
Making all data accessible all the time is also a bad idea. Sensitive data should only appear on screen when someone is available to monitor its use and react immediately should the user do something unexpected.
Storing sensitive data in the cloud is a particularly bad idea. Yes, using cloud storage makes the data more readily available and faster to access as well, but it also makes the data vulnerable. Store sensitive data on local servers when you have direct access to all the security features used to keep the data safe.
Application developers also have a propensity for making too much information available. You use data hiding to keep manager-specific data hidden from other kinds of users. However, some data has no place in the application at all. If no one actually needs a piece of data to perform a task, then don’t add the data to the application.
Many data items today are an aggregation of other data elements. It’s possible for a hacker to learn a lot about your organization by detecting the form of aggregation used and taking the data item apart to discover the constituent parts. It’s important to consider how the data is put together and to add safeguards that make it harder to discover the source of that data.
Interface
A big problem with software today is the inclusion of gratuitous features. An application is supposed to meet a specific set of goals and perform a specific set of tasks. Invariably, someone gets the idea that the software might be somehow better if it had certain features that have nothing to do with the core goals the software is supposed to meet. The term feature bloat has been around for a long time. You normally see it discussed in a monetary sense—as the source of application speed problems, the elevator of user training costs, and the wrecker of development schedules. However, application interface issues, those that are often most affected by feature bloat, have a significant impact on security in the form of increased attack surface. Every time you increase the attack surface, you provide more opportunities for a hacker to obtain access to your organization. Getting rid of gratuitous features, or moving them to an entirely different application, will reduce the attack surface—making your application a lot more secure. Of course, you’ll save money too.
Another potential problem is the hint interface—one that actually gives the security features of the application away by providing a potential hacker with too much information or too many features. Even though some mechanism for helping a user retrieve a lost password is necessary, some implementations actually make it possible for a hacker to retrieve the user’s password and become that user. The hacker might even lock the real user out of the account by changing the password (although, this action would be counterproductive because an administrator could restore the user’s access quite easily). A better system is to ensure that the user actually made the request before doing anything and then ensuring that the administrator sends the login information in a secure manner.
Constraint
A constraint is simply a method of ensuring that actions meet specific criteria before the action is allowed. For example, disallowing access to data elements unless the user has a right to access them is a kind of constraint. However, constraints have other forms that are more important. The most important constraint is determining how any given user can manage data. Most users only require read access to data, yet applications commonly provide read/write access, which opens a huge security hole.
Data has constraints to consider as well. When working with data, you must define precisely what makes the data unique and ensure the application doesn’t break any rules regarding that uniqueness. With this in mind, you generally need to consider these kinds of constraints (the sketch after this list shows one way to express them in code):
• Ensure the data is the right type
• Define the range of values the data can accept
• Specify the maximum and minimum data lengths
• List any unacceptable data values
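A sketch of one way to express such constraints in code appears below. The field names, limits, and patterns are illustrative; the point is that a single declarative rule set can back every write path in the application.

```javascript
// Declarative constraint sketch: type, range, length, and disallowed values
// defined once and enforced everywhere the data is written.
const constraints = {
  partNumber: { type: 'string', pattern: /^[A-Z]{2}-\d{4}$/, maxLength: 7 },
  quantity:   { type: 'number', min: 1, max: 500 },
  status:     { type: 'string', disallowed: ['deleted', 'hidden'] }
};

function checkConstraint(name, value) {
  const rule = constraints[name];
  if (typeof value !== rule.type) return false;
  if (rule.pattern && !rule.pattern.test(value)) return false;
  if (rule.maxLength && value.length > rule.maxLength) return false;
  if (rule.min !== undefined && value < rule.min) return false;
  if (rule.max !== undefined && value > rule.max) return false;
  if (rule.disallowed && rule.disallowed.includes(value)) return false;
  return true;
}

console.log(checkConstraint('partNumber', 'AB-1234')); // true
console.log(checkConstraint('quantity', 9999));        // false
```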
Delving into Language-specific Issues
The application environment is defined by the languages used to create the application. Just as every language has functionality that makes it perform certain tasks well, every language also has potential problems that make it a security risk. Even low-level languages, despite their flexibility, have problems induced by complexity. Of course, web-based applications commonly rely on three particular languages: HTML, CSS, and JavaScript. The following sections describe some of the language-specific issues related to these particular languages.
Defining the Key HTML Issues
HTML5 has become extremely popular because it supports an incredibly broad range of platforms. The same application can work well on a user’s desktop, tablet, and smartphone without any special coding on the part of the developer. Often, libraries, APIs, and microservices provide content in a form that matches the host system automatically, without any developer intervention. However, the flexibility that HTML5 provides can also be problematic. The following list describes some key security issues you experience when working with HTML5.
• Code Injection: HTML5 provides a large number of ways in which a hacker could inject malicious code, including sources you might not usually consider suspicious, such as a YouTube video or streamed music.
• User Tracking: Because your application uses code from multiple sources in most cases, you might find that a library, API, or microservice actually performs some type of user tracking that a hacker could use to learn more about your organization. Every piece of information you give a hacker makes the process of overcoming your security easier.
• Tainted Inputs: Unless you provide your own input checking, HTML5 lets through any input the user wants to provide. You may only need a numeric value, but the user could provide a script instead. Checking inputs thoroughly enough to ensure you really are getting what you requested is nearly impossible on the client side, so you need to ensure you have robust server-side checking as well.
Defining the Key CSS Issues
Applications rely heavily on CSS3 to create great looking presentations without hard coding the information for every device. Libraries of pre-existing CSS3 code make it easy to create professional looking applications that a user can change to meet any need. For example, a user may need a different presentation for a particular device or require that the presentation use a specific format to meet a special need. The following list describes some key security issues you experience when working with CSS3.
• Overwhelming the Design: A major reason that CSS3 code causes security issues is that the design is overwhelmed. The standards committee originally designed CSS to control the appearance of HTML elements, not to affect the presentation of an entire web page. As a result, the designers never thought to include security for certain issues because CSS wasn’t supposed to work in those areas. The problem is that the cascade part of CSS doesn’t allow CSS3 to know about anything other than its parent elements. As a result, a hacker can create a presentation that purports to do one thing, when it actually does another. Some libraries, such as jQuery, can actually help you overcome this issue.
• Uploaded CSS: In some cases, an application designer will allow a user to upload a CSS file to achieve a particular application appearance or make it work better with a specific platform. However, the uploaded CSS can also contain code that makes it easier for a hacker to overwhelm any security you have in place or to hide dirty dealings from view. For example, a hacker could include URLs in the CSS that redirect the application to unsecure servers.
• CSS Shaders: A special use of CSS can present some extreme problems by allowing access to the user agent data and cross-domain data. Later chapters in the book will discuss this issue in greater detail, but you can get a quick overview of the topic at http://www.w3.org/Graphics/fx/wiki/CSS_Shaders_Security. The big thing is that sometimes the act of rendering data on screen opens potential security holes you might not have considered initially.
Defining the Key JavaScript Issues
The combination of JavaScript with HTML5 has created the whole web application phenomenon. Without the combination of the two languages, it wouldn’t be possible to create applications that run well anywhere on any device. Users couldn’t even think about asking for that sort of application in the past because it just wasn’t possible to provide it. Today, a user can perform work anywhere using a device that’s appropriate for the location. However, JavaScript is a scripted language that can have some serious security holes. The following list describes some key security issues you experience when working with JavaScript.
• Cross-site Scripting (XSS): This issue appears earlier in the chapter because it’s incredibly serious. Any time you run JavaScript outside a sandboxed environment, it becomes possible for a hacker to perform all sorts of nasty tricks on your application.
• Cross-site Request Forgery (CSRF): A script can use the user’s credentials that are stored in a cookie to gain access to other sites. While on these sites, the hacker can perform all sorts of tasks that the application was never designed to perform. For example, a hacker can perform account tampering, data theft, fraud, and many other illegal activities, all in the user’s name. A token-based mitigation sketch appears after this list.
• Browser and Browser Plug-in Vulnerabilities: Many hackers rely on known browser and browser plug-in vulnerabilities to force an application to perform tasks that it wasn’t designed to do. For example, a user’s system could suddenly become a zombie transmitting virus code to other systems. The extent of what a hacker can do is limited by the vulnerabilities in question. In general, you want to ensure that you install any updates and that you remain aware of how vulnerabilities can affect your application’s operation.
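The CSRF mitigation mentioned above is usually a synchronizer token: the server stores a random value in the session, embeds it in each form, and rejects state-changing requests that don’t echo it back. The sketch below assumes an Express-style application with sessions and body parsing already configured; most frameworks provide this pattern as ready-made middleware.

```javascript
// Synchronizer-token sketch for CSRF protection.
const crypto = require('crypto');

function issueCsrfToken(req) {
  req.session.csrfToken = crypto.randomBytes(32).toString('hex');
  return req.session.csrfToken; // embed this in a hidden form field
}

function requireCsrfToken(req, res, next) {
  const submitted = req.body && req.body.csrfToken;
  if (!submitted || submitted !== req.session.csrfToken) {
    return res.status(403).send('Invalid CSRF token');
  }
  next();
}

// app.post('/transfer', requireCsrfToken, handleTransfer);
```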
Considering Endpoint Defense Essentials
An endpoint is a destination for network traffic, such as a service or a browser. When packets reach the endpoint, the data they contain is unpacked and provided to the application for further processing. Endpoint security is essential because endpoints represent a major point of entry for networks. Unless the endpoint is secure, the network will receive bad data transmissions. In addition, broken endpoint security can cause harm to other nodes on the network. The following sections discuss three phases of endpoint security: prevention, detection, and remediation.
It’s important not to underestimate the effect of endpoint security on applications and network infrastructure. Some endpoint scenarios become quite complex and their consequences hard to detect or even understand. For example, a recent article discusses a router attack that depends on the attacker directing an unsuspecting user to a special site: http://www.infoworld.com/article/2926221/security/large-scale-attack-hijacks-routers-through-users-browsers.html. The attack focuses on the router that the user depends upon to make Domain Name System (DNS) requests. By obtaining full control over the router, the attacker can redirect the user to locations that the attacker controls.
Preventing Security Breaches
The first step in avoiding a trap is to admit the trap exists in the first place. The problem is that most companies today don’t think that they’ll experience a data breach—it always happens to the other company—the one with lax security. However, according to the Ponemon Institute’s 2014 Cost of Cyber Crime report, the cost of cybercrime was $12.7 million in 2014, up from $6.5 million in 2010 (http://info.hpenterprisesecurity.com/LP_CP_424710_Ponemon_ALL). Obviously, all those break-ins don’t just happen at someone else’s company—they could easily happen at yours, so it’s beneficial to assume that some hacker, somewhere, has targeted your organization. In fact, if you start out with the notion that a hacker will not only break into your organization, but also make off with the goods, you can actually start to prepare for the real world scenario. Any application you build must be robust enough to:
• Withstand common attacks
• Report intrusions when your security fails to work as expected
• Avoid making assumptions about where breaches will occur
• Assume that, even with training, users will make mistakes causing a breach
Don’t assume that security breaches only happen on some platforms. A security breach can happen on any platform that runs anything other than custom software. The less prepared that the developers for a particular platform are, the more devastating the breach becomes. For example, many people would consider Point-of-Sale (POS) terminals safe from attack. However, hackers are currently attacking these devices vigorously in order to obtain credit card information access (see http://www.computerworld.com/article/2925583/security/attackers-use-email-spam-to-infect-pos-terminals.html). The interesting thing about this particular exploit is that it wouldn’t work if employees weren’t using the POS terminals incorrectly. This is an instance where training and strong policies could help keep the system safe. Of course, the applications should still be robust enough to thwart attacks.
As the book progresses, you find some useful techniques for making a breach less likely. The essentials of preventing a breach, once you admit a breach can (and probably will) occur, are to:
• Create applications that users understand and like to use (see Chapter 2)
• Choose external data sources carefully (see the “Accessing External Data” section of this chapter for details)
• Build applications that provide natural intrusion barriers (see Chapter 4)
• Test the reliability of the code you create, and carefully record both downtime and causes (see Chapter 5)
• Choose libraries, APIs, and microservices with care (see the “Using External Code and Resources” section of this chapter for details)
• Implement a comprehensive testing strategy for all application elements, even those you don’t own (see Part III for details)
• Manage your application components to ensure application defenses don’t languish after the application is released (see Part IV for details)
• Keep up-to-date on current security threats and strategies for overcoming them (see Chapter 16)
• Train your developers to think about security from beginning to end of every project (see Chapter 17)
Detecting Security Breaches
The last thing that any company wants to happen is to hear about a security breach second or third hand. Reading about your organization’s inability to protect user data in the trade press is probably the most rotten way to start any day, yet this is how many organizations learn about security breaches. Companies that assume a data breach has already occurred are the least likely to suffer permanent damage from a data breach and most likely to save money in the end. Instead of wasting time and resources fixing a data breach after it has happened, your company can detect the data breach as it occurs and stop it before it becomes a problem. Detection means providing the required code as part of your application and then ensuring these detection methods are designed to work with the current security threats.
Your organization, as a whole, will need a breach response team. However, your development team also needs individuals in the right places to detect security breaches. Most development teams today will need experts in:
• Networking
• Database management
• Application design and development
• Mobile technology
• Cyber forensics
• Compliance
Each application needs such a team and the team should meet regularly to discuss application-specific security requirements and threats. In addition, it’s important to go over various threat scenarios and determine what you might do when a breach does occur. By being prepared, you make it more likely that you’ll detect the breach early—possibly before someone in management comes thundering into your office asking for an explanation.
Remediating Broken Software
When a security breach does occur, whatever team your organization has in place must be ready to take charge and work through the remediation process. The organization, as a whole, needs to understand that not fixing the security breach and restoring the system as quickly as possible to its pre-breach state could cause the organization to fail. In other words, even if you’re a great employee, you may well be looking for a new job.
The person in charge of security may ask the development team to help locate the attacker. Security Information and Event Management (SIEM) software can help review logs that point to the source of the problem. Of course, this assumes your application actually creates appropriate logs. Part of the remediation process is to build logging and tracking functionality into the application in the first place. Without this information, trying to find the culprit so that your organization can stop the attack is often a lost cause.
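What that functionality looks like varies by application, but even a minimal sketch such as the one below, which appends structured security events to a log file, gives forensic and SIEM tools something to work with. The file path and event fields are illustrative; production systems normally write to a protected, append-only location.

```javascript
// Minimal structured security-event logging sketch.
const fs = require('fs');

function logSecurityEvent(event) {
  const entry = Object.assign({ time: new Date().toISOString() }, event);
  fs.appendFileSync('./security-events.log', JSON.stringify(entry) + '\n');
}

// Events worth recording include failed logins, privilege changes, and
// unexpected input that failed validation.
logSecurityEvent({ type: 'login_failure', user: 'jdoe', ip: '203.0.113.7' });
logSecurityEvent({ type: 'privilege_change', user: 'jdoe', newRole: 'admin' });
```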
Your procedures should include a strategy for checking for updates or patches for each component used by your application. Maintaining good application documentation is a must if you want to achieve this goal. It’s too late to create a list of external resources at the time of a breach; you must have the list in hand before the breach occurs. Of course, the development team will need to test any updates that the application requires in order to ensure that the breach won’t occur again. Finally, you need to ensure that the data has remained safe throughout the process and perform any data restoration your application requires.
Dealing with Cloud Storage
Cloud storage is a necessary evil in a world where employees demand access to data everywhere using any kind of device that happens to be handy. Users have all sorts of cloud storage solutions available, but one of the most popular now is Dropbox (https://www.dropbox.com/), which had amassed over 300 million users by the end of 2014. Dropbox (and most other cloud storage entities) has a checkered security history. For example, in 2011, Dropbox experienced a bug where anyone could access any account using any password for a period of four hours (see the article at http://www.darkreading.com/vulnerabilities-and-threats/dropbox-files-left-unprotected-open-to-all/d/d-id/1098442). Of course, all these vendors will tell you that your application data is safe now that they have improved security. It isn’t a matter of if, but when, a hacker will find a way inside the cloud storage service or the service itself will drop the ball yet again.
A major problem with most cloud storage is that it’s public in nature. For example, Dropbox for Business sounds like a great idea and it does provide additional security features, but the service is still public. A business can’t host the service within its own private cloud.
In addition, most cloud services advertise that they encrypt the data on their servers, which is likely true. However, the service provider usually holds the encryption keys under the pretense of having to allow authorities with the proper warrants access to your data. Because you don’t hold the keys to your encrypted data, you can’t control access to it and the encryption is less useful than you might think.
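One way to keep control of the keys is to encrypt data yourself before it ever reaches the provider, as in the following sketch, which uses Node’s crypto module with AES-256-GCM. Key storage and rotation remain your responsibility and are not shown; the file contents are a placeholder.

```javascript
// Encrypt locally before upload so the cloud provider never sees the key.
const crypto = require('crypto');

function encryptForUpload(plaintext, key) {
  const iv = crypto.randomBytes(12); // unique per message
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, authTag: cipher.getAuthTag() };
}

const key = crypto.randomBytes(32); // held locally, never uploaded
const payload = encryptForUpload(Buffer.from('contents of quarterly-results.xlsx'), key);
// payload.ciphertext (plus iv and authTag) is what goes to the provider.
```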
Security of Web applications is a big deal because most applications tomorrow (if not all of them) will have a web application basis. Users want their applications available everywhere and the browser is just about the only means of providing that sort of functionality on so many platforms in an efficient manner. In short, you have to think about the cloud storage issues from the outset. You have a number of options for dealing with cloud storage as part of your application strategy.
• Block Access: It’s actually possible to block all access to cloud storage using a firewall, policy, or application feature. However, blocking access everywhere a user might want to use cloud storage is extremely hard, and users are quite determined. In addition, blocking access can actually have negative effects on meeting business needs. For example, partners may choose to use cloud storage as a method for exchanging large files. A blocking strategy also incurs user wrath, so users may refuse to work with your application or find ways to circumvent the functionality you sought to provide. This is the best option to choose when your organization has to manage large amounts of sensitive data, has legal requirements for protecting data, or simply doesn’t need the flexibility of using cloud storage.
• Allow Uncontrolled Access: You could choose to ignore the issues involved in using cloud storage. However, such a policy opens your organization to data loss, data breaches, and all sorts of other problems. Unfortunately, many organizations currently use this approach because controlling user access has become so difficult and the organization lacks the means of using some other approach.
• Rely on Company Mandated Security Locations: If you require users to access cloud storage using a company account, you can at least monitor file usage and have the means to recover data when an employee leaves. However, the basic problems with cloud storage remain. A hacker with the right knowledge could still access the account and grab your data or simply choose to snoop on you in other ways. This option does work well if your organization doesn’t manage data with legally required protections and you’re willing to exchange some security for convenience.
• Control Access Within the Application: Many cloud services support an Application Programming Interface (API) that allows you to interact with the service in unique ways. Even though this approach is quite time consuming, it does offer the advantage of letting you control where the user stores sensitive data, while still allowing the user the flexibility to use cloud storage for less sensitive data. You should consider this solution when your organization needs to interact with a large number of partners, yet also needs to manage large amounts of sensitive or critical data.
• Rely on a Third Party Solution: You can find third party solutions, such as Accellion (http://www.accellion.com/), that provide cloud storage connectors. The vendor provides a service that acts as an intermediary point between your application and the online data storage. The user is able to interact with data seamlessly, but the service controls access using policies that you set. The problem with this approach is that you now have an additional layer to consider when writing the application. In addition, you must trust the third party providing the connector. This particular solution works well when you need flexibility without the usual development costs and don’t want to create your own solution that relies on API access.
Using External Code and Resources
Most organizations today don’t have the time or resources needed to build applications completely from scratch. In addition, the costs of maintaining such an application would be enormous. In order to keep costs under control, organizations typically rely on third party code in various forms. The code performs common tasks and developers use it to create applications in a Lego-like manner. However, third party code doesn’t come without security challenges. Effectively you’re depending on someone else to write application code that not only works well and performs all the tasks you need, but does so securely. The following sections describe some of the issues surrounding the use of external code and resources.
Defining the Use of Libraries
A library is any code that you add into your application. Many people define libraries more broadly, but for this book, the essentials are that libraries contain code and that they become part of your application as you put the application in use. A commonly used library is jQuery (https://jquery.com/). It provides a wealth of functionality for performing common tasks in an application. The interesting thing about jQuery is that you find the terms library and API used interchangeably as shown in Figure 1-2.
Figure 1-2. Many sites use library and API interchangeably
Looking at the jQuery site also tells you about optimal library configurations. In fact, the way in which jQuery presents itself is a good model for any library that you want to use. The library is fully documented and you can find multiple examples of each library call (to ensure you can find an example that is at least close to what you want to do). More importantly, the examples are live, so you can actually see the code in action using the same browsers that you plan to use for your own application.
Like any other piece of software, jQuery has its faults too. As the book progresses, you’re introduced to other libraries and to more details about each one so that you can start to see how features and security go hand-in-hand. Because jQuery is such a large, complex library, it has a lot to offer, but there is also more attack surface for hackers to exploit.
When working with libraries, the main focus of your security efforts is your own application, because the library code you download from the host server runs as part of it. Library code is executed in-process, so you need to know that you can trust the source from which you get library code. Chapter 6 discusses the intricacies of using libraries as part of an application development strategy.
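One practical safeguard is to verify that the copy of a library you serve matches a hash you recorded when you vetted it; browsers support the same idea natively through Subresource Integrity. The sketch below checks a local copy with Node’s crypto module; the file path and expected hash are placeholders.

```javascript
// Verify a downloaded library against a hash recorded during vetting.
const crypto = require('crypto');
const fs = require('fs');

const expected = 'sha384-PLACEHOLDER_HASH_FROM_YOUR_VETTING_STEP';

function verifyLibrary(path) {
  const digest = crypto.createHash('sha384')
    .update(fs.readFileSync(path))
    .digest('base64');
  return 'sha384-' + digest === expected;
}

// console.log(verifyLibrary('./vendor/jquery.min.js')); // true only if untampered
```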
Defining the Use of APIs
An Application Programming Interface (API) is any code you can call as an out-of-process service. You send a request to the API and the API responds with some resource that you request. The resource is normally data of some type, but APIs perform other sorts of tasks too. The idea is that the code resides on another system and that it doesn’t become part of your application. Because APIs work with a request/response setup, they tend to offer broader platform support than libraries, but they also work slower than libraries do because the code isn’t local to the system using it.
A good example of APIs is the services that Amazon offers for various developer needs (https://developer.amazon.com/). Figure 1-3 shows just a few of these services. You must sign up for each API you want to use and Amazon provides you with a special key to use in most cases. Because you’re interacting with Amazon’s servers and not simply downloading code to your own system, the security rules are different when using an API.
Figure 1-3. The Amazon API is an example of code that executes on a host server.
Each API tends to have a life of its own and relies on different approaches to issues such as managing data. Consequently, you can’t make any assumption as to the security of one API when compared to another, even when both APIs come from the same host.
APIs also rely on an exchange of information. Any information exchange requires additional security because part of your data invariably ends up on the host system. You need to know that the host provides the means for properly securing the data you transmit as part of a request. Chapter 7 discusses how to work with APIs safely when using them as part of an application development strategy.
Defining the Use of Microservices
Like an API, microservices execute on a host system. You make a request and the microservice responds with some sort of resource—usually data. However, microservices differ a great deal from APIs. The emphasis is on small with microservices—a typical microservice performs one task well. In addition, microservices tend to focus heavily on platform independence. The idea is to provide a service that can interact with any platform of any size and of any capability. The difference in emphasis between APIs and microservices greatly affects the security landscape. For example, APIs tend to be more security conscious because the host can make more assumptions about the requesting platform and there is more to lose should something go wrong.
Current microservice offerings also tend to be homegrown because the technology is new. Look for some of the current API sites to begin offering microservices once the technology has grown. In the meantime, it pays to review precisely how microservices differ by looking at sites such as Microservice Architecture (http://microservices.io/). The site provides example applications and a discussion of the various patterns in use for online code at the moment as shown in Figure 1-4.
Figure 1-4. Microservices are new technology and it helps to see them in light of other usage patterns.
When working with microservices, you need to ensure that the host is reliable and that the single task the microservice performs is clearly defined. It’s also important to consider how the microservice interacts with any data you supply and not to assume that every microservice interacts with the data in the same fashion (even when the microservices exist on the same host). The use of microservices does mean efficiency and the ability to work with a broader range of platforms, but you also need to be aware of the requirement for additional security checks. Chapter 8 discusses how to use microservices safely as part of an application development strategy.
Accessing External Data
External data takes all sorts of forms. Any form of external data is suspect because someone could tamper with it in the mere act of accessing the data. However, it’s helpful to categorize data by its visibility when thinking about security requirements.
You can normally consider private data sources relatively secure. You still need to check the data for potential harmful elements (such as scripts embedded within database fields). However, for the most part, the data source isn’t purposely trying to cause problems. Private sources typically include:
• Data on hosts in your own organization
• Data on partner hosts
• Calculated sources created by applications running on servers
• Imported data from sensors or other sources supported by the organization
Paid data sources are also relatively secure. Anyone who provides access to paid data wants to maintain your relationship and reputation is everything in this area. As with local and private sources, you need to verify the data is free from corruption or potential threats, such as scripts. However, because the data also travels through a public network, you need to check for data manipulation and other potential problems from causes such as man-in-the-middle attacks.
There are many interesting repositories online that you could find helpful when creating an application. Rather than generate the data yourself or rely on a paid source, you can often find a free source of the data. Sites such as the Registry of Research Data Repositories offer APIs now so that you can more accurately search for just the right data repository. In this case, you can find the API documentation at http://service.re3data.org/api/doc.
Data repositories can be problematic and the more public the repository, the more problematic it becomes. The use to which you put a data repository does make a difference, but ensuring you’re actually getting data and not something else in disguise is essential. In many cases, you can download the data and scan it before you use it. For example, the World Health Organization (WHO) site shown in Figure 1-5 provides the means to sift through its data repository to find precisely the data you require and then to download just that dataset, reducing the risk that you’ll get something you really didn’t want.
Figure 1-5. Some repositories, such as the WHO database, are downloadable.
There are many kinds of data repositories and many ways to access the data they contain. The technique you use depends on the data repository interface and the requirements of your application. Make sure you read about data repositories in Chapter 3. Chapters 6, 7, and 8 all discuss the use of external data as it applies to libraries, APIs, and microservices. Each environment has different requirements, so it’s important to understand precisely how your code affects the access method and therefore the security needed to use the data without harm.
Allowing Access by Others
The vast majority of this chapter discusses protection of your resources or your use of data and resources provided by someone else. Enterprises don’t exist in a vacuum. When you create a data source, library, API, microservice, or other resource that someone else can use, the third party often requests access. As soon as you allow access to the resource by this third party, you open your network up to potential security threats that you might not ever imagine. The business partner or other entity is likely quite reliable and doesn’t intend to cause problems. However, their virus becomes your virus. Any security threat they face, becomes a security threat you face as well. If you have enough third parties using your resources, the chances are high that at least one of them has a security issue that could end up causing you problems as well.
Of course, before you can do anything, you need to ensure that your outside offering is as good as you think it is. Ensuring the safety of applications you support is essential. As a supplier of resources, you suddenly become a single point of failure for multiple outside entities. Keeping the environment secure means:
• Testing all resources regularly to ensure they remain viable
• Providing useful resource documentation
• Ensuring third party developers abide by the rules
• Performing security testing as needed
• Keeping up with potential threats to your resource
• Updating host resources to avoid known vulnerabilities
Developers who offer their resources for outside use have other issues to consider as well, but these issues are common enough that they’re a given for any outside access scenario. In addition, you must expect third parties to test the resources to ensure they act as advertised. For example, when offering a library, API, or microservice, you must expect that third parties will perform input and output testing and not simply take your word for it that the resource will behave as expected.
Once you get past the initial phase of offering a resource for third party use, you must maintain the resource so applications continue to rely upon it. In addition, it’s important to assume you face certain threats in making a resource offering. Here are some of the things you must consider:
• Hackers will attempt to use your resource to obtain access to your site
• Developers will misuse the resource and attempt to get it to perform tasks it wasn’t designed to perform
• The resource will become compromised in some manner