
The Enterprise Cloud: Best Practices for Transforming Legacy IT (2015)

Chapter 6. Cloud Security

Key topics in this chapter:

§ Cloud security planning and design

§ Governance and operations

§ Multitenant security

§ Security in an automation cloud environment

§ Identity management and federation

§ Data sovereignty and on-shore operations

§ Cloud security standards and certifications

§ Cloud security best practices

In this chapter, I will focus on cloud security planning, system design, governance, and operational considerations. Rather than cover IT security from a general perspective, we will concentrate on areas unique to cloud environments. Information technology security and cloud security are such sweeping and important topics that they could easily require multiple books to cover everything. It is important to understand that all general IT security best practices still apply, but few books and industry standards organizations have provided real-world guidance and lessons learned on cloud-specific security. That being said, I recommend reading the National Institute of Standards and Technology (NIST) Special Publication 500-299 as a good baseline cloud-security reference model with detailed specifications. In this chapter, I will focus more on real-world lessons learned and best practices rather than a government-style reference model (I will leave that to NIST and other government organizations).

This chapter is divided into several sections: we’ll take a look at security planning and design, infrastructure security, security standards, and best practices. Let’s dig in.

Cloud Security Planning and Design

As an organization begins to plan for a cloud transition or deployment, certain security-specific considerations should be discussed. These security considerations should be part of the overall cloud planning process, not just a security audit or an assessment after everything is deployed. You should include the topics covered in this chapter as part of the appropriate governance, policies, and systems design planning for your cloud.

Planning

The first step to consider is what IT systems, applications, and data should or must remain within a legacy enterprise datacenter. Not all workloads are ideal candidates for transition to the cloud due to data sensitivity concerns, mission criticality, or industry regulations on security and data controls. Security experts need to work with application and business owners to determine which applications and data you can easily move to a cloud and which you should evaluate further or even delay moving to a cloud. You need to repeat this assessment on all key applications and data repositories to develop a priority list with specific notations on the desired sensitivity, regulatory, or other security classifications (Chapter 4 presents a detailed application assessment process). Having this information is critical for the IT and financial business units of any organization to help determine the type of cloud to be used or deployed, calculate initial infrastructure capacity, financial and return-on-investment (ROI) models, and to establish overall operational governance decisions.

Key Take-Away

You should perform assessments of each application and data repository to determine the security posture, suitability, and priority for transition to the cloud. Match security profiles to the cloud architecture, controls, and targeted security compliance standard.
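
To make this assessment concrete, here is a minimal sketch of a scoring model that turns per-application security attributes into a migration priority. The attribute names, weights, and thresholds are hypothetical illustrations, not a prescribed formula; a security team would calibrate them to its own classification scheme.

from dataclasses import dataclass

@dataclass
class AppProfile:
    name: str
    data_sensitivity: int     # 1 (public) to 5 (regulated/classified)
    mission_criticality: int  # 1 (low) to 5 (critical)
    regulatory_burden: int    # 1 (none) to 5 (strict, e.g., HIPAA or PCI)

def migration_priority(app):
    # Lower combined risk means an earlier, easier cloud migration candidate.
    risk = app.data_sensitivity + app.mission_criticality + app.regulatory_burden
    if risk <= 6:
        return "move early"
    if risk <= 10:
        return "evaluate further"
    return "delay or keep on-premises"

apps = [
    AppProfile("public website", 1, 2, 1),
    AppProfile("patient records", 5, 5, 5),
]
for app in apps:
    print(app.name, "->", migration_priority(app))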

A logical next step in planning is to consider the cloud model(s) to be procured or deployed. Although public cloud services provide solid infrastructure security, they often do not have the level of security or customization your organization might need. A private cloud can be heavily customized to meet security or feature requirements, but you still need to control costs, scope creep, and over-building the initial cloud.

In a hybrid or multivendor cloud, security becomes even more complicated when evaluating, selecting, and using cloud services that are not all at the same level of security or accreditation. If you know from the beginning that your organization is likely to form a hybrid cloud with multiple cloud providers, consider establishing a minimum-acceptable security posture that all providers must meet. Then, evaluate and select certain cloud providers that offer stronger security compliance for mission-critical applications or workloads, allowing the hybrid cloud management system to provision services to the appropriate cloud provider(s). Note that the hybrid cloud management system or a cloud broker might have access to the data stored on each of the multiple cloud providers; thus, the cloud broker or hybrid provider needs to be accredited at a level equal to your highest provider security requirement. For more information on cloud management, hybrid cloud, and brokering, refer to Chapters 7 and 8.
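
As a minimal sketch of this idea, the following routes workloads only to providers that meet both the minimum-acceptable baseline and the workload's own requirement. The numeric security levels and provider names are invented for illustration; a real broker would also weigh cost, capacity, and data sovereignty constraints.

# Baseline posture every provider must meet, per the recommendation above.
MIN_ACCEPTABLE_LEVEL = 2

providers = {   # provider name -> accredited security level (higher = stronger)
    "provider-a": 2,
    "provider-b": 4,
}

def eligible_providers(required_level):
    # A workload may only land on providers meeting the baseline AND its own need.
    needed = max(required_level, MIN_ACCEPTABLE_LEVEL)
    return [name for name, level in providers.items() if level >= needed]

print(eligible_providers(1))  # ['provider-a', 'provider-b']
print(eligible_providers(3))  # ['provider-b'] for mission-critical workloads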

Governance

The first and most obvious consideration is to define who establishes the security posture, policies, and potential accreditation of the system. Based on the assessment of each application and data repository, determine the overall cloud security model including any industry standard you will follow. Remember that you can select a security model or standard with which to be compliant, but you do not necessarily need to go through the actual certification process unless you’re required to by industry or government regulations. Table 6-1, later in this chapter, provides a listing of the most common security standards and organizations.

Organizations should determine who the consumers of the cloud services will be. If multiple departments or peer agencies will be using the cloud service, determine which security team or organization controls the standards for the overall cloud or each application. You might want to form a committee of security experts from each consuming agency, but ultimately some of the overall security strategy — and a tie-breaker for difficult decisions — will need to come from the organization that is leading or sponsoring the cloud effort. I recommend that a baseline security posture be adopted and that other agencies or departments be more involved in setting the security standards for their unique applications and mission-critical workloads.

You should also consider daily security operations, response to events, and ongoing security reviews. Regardless of where the cloud services are hosted (a provider or an on-premises enterprise datacenter), you need to determine whether a third party or your internal staff will be responsible for continuous monitoring, incident response, and reporting. Often in a multitenant cloud environment, it is difficult to involve every consuming organization’s security personnel in daily operational activities (described in more detail later), so the security operations process, ownership, and visibility into statistics, events, and reports should be documented and well understood to ensure acceptance of the cloud by consuming agencies and users.

Multitenant Security

One of the fundamental benefits of a cloud is the ability to host multiple consuming organizations on a shared pool of network, servers, storage, and application resources. Even though public clouds heavily rely on multiple tenants to achieve a profitable scale of economy, a private cloud might or might not be deployed for use by multiple consumer organizations; however, there can still be multiple departments or project/application development teams that desire isolation of cloud services, data, network traffic, or billing and chargeback.

Key Take-Away

Multitenancy features apply to almost all cloud environments.

Public clouds rely heavily on a multitenant sales and data-isolation model. Private clouds can be deployed for just one organization, but there might be multiple departments, projects, or application development teams that require isolation.

From a security standpoint, it is critical to understand how multitenancy is configured so that each consuming organization or department is isolated from all the others. The cloud management system, including the customer-facing portal and backend automation systems, is the primary mechanism for managing multiple tenants. The cloud management system can integrate or synchronize with an identity-management system that is either hosted by the cloud provider or connected to an internal enterprise directory service such as Active Directory or a Lightweight Directory Access Protocol (LDAP) service (see “Identity Management and Federation” later in this chapter). Using the authenticated user and group hierarchy, you can define multiple organization names in the cloud-management system and create access control list (ACL) groups. Users from the identity-management system fall within one or more of the organizations that are defined or synchronized to the cloud management portal. Each ACL group is then assigned cloud portal permissions for roles such as administrator, finance and procurement, approver, or auditor. When a user or administrator is authenticated into the customer-facing cloud management portal, he only has permissions and a view into cloud services that are assigned to his organization and within the ACL permissions. For detailed information on the cloud portal, ordering, approval, and automation systems, see Chapter 7.

The cloud management system and portal controls which users and ACL groups have access and permissions for ordering and managing services. You can define other unique ACL groups to control access to server remote logon, virtual machines (VMs), applications, and data.
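
As a minimal sketch of these software-based controls, the following models organizations and ACL groups in a hypothetical cloud portal; the class and role names are illustrative, not any particular product’s API.

class CloudPortal:
    def __init__(self):
        self.org_of_user = {}  # user -> organization defined in the portal
        self.acl = {}          # (organization, role) -> set of member users

    def add_user(self, user, org, role):
        self.org_of_user[user] = org
        self.acl.setdefault((org, role), set()).add(user)

    def can(self, user, org, role):
        # A user only sees and acts on services within his or her own organization.
        return (self.org_of_user.get(user) == org
                and user in self.acl.get((org, role), set()))

portal = CloudPortal()
portal.add_user("alice", "dept-finance", "approver")
print(portal.can("alice", "dept-finance", "approver"))  # True
print(portal.can("alice", "dept-hr", "approver"))       # False: isolated tenant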

The key aspect to remember here is that all of the aforementioned access controls are software based and do not provide any level of physical separation of servers, VMs, or data in the cloud. In a normal multitenant cloud, there are pools of physical servers and storage that are segregated using software and permissions to isolate one customer from another. Typically, one physical server will host multiple VMs — each VM being owned by a different customer or department. This VM isolation is managed through ACLs in the cloud management system and the hypervisor platform.

Key Take-Away

Most clouds use software-based access controls and permissions to isolate customers from one another in a multitenant cloud environment. Hardware isolation is an option for some clouds but is also an additional cost.

In a public cloud, the use of software-based access controls, roles-based permissions, storage, and hypervisor separation is commonplace. If more levels of isolation or separation of workloads and data between customers are required, other options such as a virtual private cloud (VPC) or a private cloud are often more suitable. In a VPC, a cloud provider can still host the services, but you can configure your cloud with physically separate networks, physical servers, pools of VMs, and storage. This physical separation, often referred to as an air gap, will normally cost more than cloud services in a multitenant shared environment. For a private cloud, you have the most flexibility to customize how networks, servers, and storage are separated, and if or how multiple consuming organizations are isolated from one another. This is where security experts need to weigh the value of application and data security against the cost of the cloud infrastructure to find the right match. One of the biggest steps in making these decisions is a thorough knowledge of multitenancy, identity and authentication systems, hypervisor, and storage security controls. With this knowledge, security personnel will begin to realize that clouds are not inherently less secure than legacy enterprise IT; in fact, they are potentially more secure, with more flexibility and more alignment of data protection needs to security policy. This leads us to the next topic.

Is Your Data More or Less Secure in the Cloud?

The basic concept of cloud computing is the transformation and consolidation of traditional compute services, data, and applications to an on-demand, automated, elastic, often third-party-hosted cloud. Customers and security industry experts often ask whether their cloud environment is more or less secure than a traditional behind-the-firewall enterprise network. Not surprisingly, there are two polar-opposite opinions among industry experts on this:

Less secure

Some industry experts claim that consolidating all customer data and servers into a cloud means that a successful hacker could have access to massive amounts of proprietary information. This would make the risk of data loss or tampering higher in cloud environments, compared to server farms and applications in traditional enterprise datacenters because the cloud is, in theory, a more attractive and lucrative target.

More secure

The majority of security industry experts agree that the cloud can be more secure than traditional enterprise IT operations. The reason is that a consolidated location for servers, applications, and data is easier to protect and focus security resources on than traditional enterprise systems — if it is done properly. Public cloud providers and any private cloud owner can procure all of the very latest in security appliances and software, centralize a focused team of security personnel, and consolidate all systems events, logs, and response activities. The quality of security normally increases, but at a high capital expense to initially procure the newer and better security systems and staff upgrades. Industry experts point to consolidated focus and simpler correlation of logs and events as reasons for cloud environments being more secure than most legacy server farms. Predictable and consistent automated service provisioning of VMs, storage, operating systems (OSs), applications, and updates also improves configuration accuracy, rapid updates, and immediate triggering of security monitoring and scanning.

Key Take-Away

Cloud automation brings consistency and real-time updating of security and operational systems, improving configuration control, monitoring, and overall security compared to legacy IT environments.

In a cloud environment, the security and mitigation systems are in place protecting the entire network. Cloud providers have sufficient capability and capacity to monitor and react to real or perceived security issues. However, because this is a multitenant environment, any individual customer with higher-than-normal security requirements might be more difficult and expensive to accommodate in a shared cloud model. This is probably the number one reason for private cloud deployments. A private cloud is dedicated to one customer (or group of departments) that has shared goals and, more important, a shared security posture.

Another concern with multitenant cloud security is transparency to the customer. The security and monitoring systems are designed to consolidate and correlate events across the entire environment with a skilled team of security operations personnel managing and responding. Presenting this event data to individual customers in a multitenant environment is difficult because the security software is often not designed to break out the consolidated and correlated data for each tenant.

Security in an Automated Cloud Environment

One of the major differentiators of a cloud environment versus a modern on-premises datacenter with virtualization is the automation of as many processes and service-provisioning tasks as possible. This automation extends into patching of software, distribution of software and OS updates, and creation of network zones. Each of these automated provisioning processes presents a challenge to traditional security monitoring because the software and hardware environment is constantly changing with new users, new customers, new VMs, and new software instances. A cloud requires equal attention to the automation of asset and operational management; as new systems are automatically provisioned, so too must the security and operational systems learn about the new items in real time so that scanning and monitoring of these assets can be initiated immediately.

Automation

The first rule for any cloud is to automate “everything.” You should plan and design a cloud system with as few manual processes as possible. Manual processes are inherently less consistent and inhibit the ability for rapid provisioning of new services or expanded capacity on demand, which is fundamental to a cloud. So, a core security concept — and this might be contrary to ingrained principles of the past — is to avoid any security processes or policies that delay or prevent automation.

As described in Chapter 2, the relentless pursuit of automation brings operational efficiency, consistent configurations, rapid provisioning on demand, elastic scale up and scale down, and support cost savings. This pursuit of all things automated also improves security. Lessons learned since 2010 illustrate that traditional security processes have tended to be manual approvals, after-provisioning audits, and slow, methodical, labor-intensive assessments — tendencies that must change when building or operating a cloud. Just as we discussed the cloud as a new style of IT, the new style of cloud security is to assess and precertify all cloud services, applications, VM templates, operating system builds, and so on. I will discuss this in depth throughout the remainder of this chapter.

Key Take-Away

You should adopt the theme “relentless pursuit of automation.” Eliminate any legacy security processes that inhibit rapid provisioning and automation.

As soon as new systems (VMs, applications, etc.) are brought online and added to the asset and configuration management databases, the security management systems should immediately be triggered to launch any system scans and start routine monitoring.

Key Take-Away

There should be little or no delay between provisioning a new system or application in the cloud and beginning security scans and continuous monitoring.

I’m not going to try and sell you on the merits and necessity of continuous security monitoring other than to say you should not manage a cloud without it. In a dynamically changing and automated cloud, continuous monitoring should be combined with continuous updating of asset and configuration databases. This real-time updating of assets and configuration changes will be fed into the security systems whenever new servers, VMs, and applications are launched and need to be scanned and monitored. Without automated updating of asset, configuration, and monitoring systems in real time as cloud services are being provisioned and de-provisioned, it would be almost impossible to keep up (manually or otherwise) with all of the changes to VMs, virtual LANs (VLANs), IP addresses, applications, and so on.
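
A minimal sketch of this provisioning hook follows; the function and variable names stand in for your actual CMDB and scanner integrations, which are assumptions here rather than any specific product’s interface.

import datetime

asset_db = []    # stand-in for the asset/configuration database
scan_queue = []  # stand-in for the security scanner's work queue

def on_vm_provisioned(vm_id, ip, template):
    record = {
        "vm_id": vm_id,
        "ip": ip,
        "template": template,
        "registered": datetime.datetime.utcnow().isoformat(),
    }
    asset_db.append(record)   # 1. asset registered in real time, never skipped
    scan_queue.append(vm_id)  # 2. baseline security scan triggered immediately
    # 3. continuous monitoring subscribes to this same provisioning event

on_vm_provisioned("vm-0042", "10.0.4.17", "linux-hardened-v3")
print(asset_db[0]["vm_id"], "queued for scan:", scan_queue)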

Precertification of VM Templates

Organizations with strict security accreditation processes often struggle with the idea that cloud services should immediately provision new VMs when ordered. I recommend changing the legacy security process to have the IT security teams precertify all “gold images” or templates that can be launched within new physical devices or VMs. These templates might include a combination of an OS, applications, patches, and agents for network or security monitoring. As soon as the VM is ordered and provisioned by the cloud automation system, the gold image template is copied to a new VM and then booted. Because the security teams have already approved the templates, each new VM that is based on a precertified template should also be considered approved and in compliance. Of course, any future changes to the applications or VM might need to go through additional change control and security scrutiny. One of the best ways to control software application deployment and security management is to create and certify automated application installation packages that can be deployed in combination with VM templates.

Certification of gold images is not just an initial step when using or deploying a new cloud. Many organizations and customers will request that existing or future gold images — homegrown or commercial off-the-shelf (COTS) applications and configurations — be loaded and added to the cloud service catalog. I highly recommend that security experts perform scanning and assessments of every new or modified gold image before loading it into the cloud management platform and giving customers the ability to order it. Again, using a combination of VM templates and smaller application installation packages — all precertified by security — will reduce the frequency of having to update the master VM gold image. Also realize that when a new gold image is accepted and added to the cloud, the cloud operational personnel (depending on contract scope) might now be responsible for all future patches, upgrades, and support of the template. Many cloud providers charge a fee to assess and import customer VMs or gold images. Customers might push back on this extra cost, so you should take the time to explain the need for these manually intensive assessments and the ongoing upgrades and support required.
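
The following is a minimal sketch of enforcing gold-image precertification at provisioning time by comparing a template’s checksum against a registry recorded when security signed off. The registry contents and image names are invented for illustration.

import hashlib

def fingerprint(image_bytes):
    return hashlib.sha256(image_bytes).hexdigest()

# The security team certifies an image and records its fingerprint.
certified_image = b"...contents of the linux-hardened-v3 template..."
approved_images = {"linux-hardened-v3": fingerprint(certified_image)}

def can_provision(name, image_bytes):
    # Refuse any template that is not the exact bytes security certified.
    return approved_images.get(name) == fingerprint(image_bytes)

print(can_provision("linux-hardened-v3", certified_image))    # True
print(can_provision("linux-hardened-v3", b"tampered image"))  # False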

Precertification of Network Zones and Segmentation

Most cloud services, such as a VM or applications running on a VM, require a default network or virtual network connection as part of the automated configuration. You can configure VMs with multiple virtual network interfaces and connect them to one or more production or nonproduction network segments within your datacenter. These network configurations should be included in the VM configuration when your security team performs its precertification.

You might want to offer additional network segmentation as an option through the use of virtual firewalls and VLANs to secure or isolate networks. Applications that need to be Internet-facing should be further segmented and firewalled from the rest of the production cloud VMs and applications. Platform as a Service (PaaS) offerings are very often configured with multiple tiers of VMs and applications that interact and can have several network zones to protect web-facing frontend servers from middleware and backend databases that all form the enterprise application.

Key Take-Away

Precertify all production and nonproduction network segments so that VMs can be provisioned automatically without manual security approval processes. Also consider preapproving a pool of optional virtual networks that can be provisioned automatically upon a customer order.

One specific caution here is to not overdo the default segmentation of networks, because this only complicates the offerings and usefulness of the cloud environment and increases operational management. Stick with some basic level of network segmentation, such as a default single network per customer, and then allow customers to request additional network segments. I recommend precertifying several network VLANs, firewall port rules, load balancers, and storage options and making these available to cloud consumers via the self-service control panel. By precertifying several options — possibly charging customers extra for these — you can still offer your customers flexibility and rapid provisioning by having all of these options already vetted and certified by security personnel. Remember, customers will often request a future VLAN or the opening of firewall ports that goes beyond the precertified configuration; these requests can still be handled by a less-than-automated approval or vetting process.
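
As a minimal sketch, the following allocates from a security-precertified VLAN pool on customer order and diverts anything beyond the preapproved options to a manual vetting queue; the VLAN IDs and the queue are illustrative only.

precertified_vlans = [110, 111, 112, 113]  # vetted by security in advance
allocated = {}                             # VLAN ID -> owning customer
manual_review_queue = []                   # requests beyond the preapproved set

def request_network(customer, custom_spec=None):
    if custom_spec:
        # Anything beyond the precertified options still gets human vetting.
        manual_review_queue.append((customer, custom_spec))
        return None
    for vlan in precertified_vlans:
        if vlan not in allocated:
            allocated[vlan] = customer  # instant, no approval step required
            return vlan
    raise RuntimeError("preapproved pool exhausted; certify more VLANs")

print(request_network("dept-finance"))               # 110
print(request_network("dept-hr", "open port 8443"))  # None; queued for review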

Precertification of Applications

As just mentioned, security precertification also extends to all applications and future updates that will be available on the cloud. You should configure applications as automated installation packages where any combination of application packages can be ordered and provisioned and then installed on top of a VM gold image. By separating the VM gold image from application installation packages, you can reduce the number of VM image variations and frequency in updating VM images (compared to fully configured VM images that include applications). Additional packages for upgrades and patching of the OS and apps will also be deployed in an automated fashion to ensure efficiency, consistency, and configuration management. Remember that customers and cloud administrators can also uninstall application packages on demand using the cloud management system orchestration tools.

Key Take-Away

Use a combination of security-approved VM templates and application installation packages. Reduce the quantity of VM image variations and frequency of updates by separating the OS image from the applications.

The point here, just as with VM gold images and network zones, is to precertify everything to facilitate automated deployment — avoid forcing any manual security assessments in the provisioning process. You need to realize that this precertification is not necessarily difficult, but it will be an ongoing effort because new applications and update packages are continuously introduced. Finally, you should understand that more complex multitiered applications (e.g., multitiered PaaS applications) will require significantly more security assessment and also involvement in the initial application design process. If security experts are not involved with the initial multitiered application design, trying to map multiple production-ready application tiers to the automated and precertified network segments or VLANs can be a nightmare.
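
Here is a minimal sketch of the separation just described: one certified base image plus small certified application packages, with provisioning refused for anything outside the precertified catalog. The image and package names and the install steps are invented for illustration.

certified_base_images = {"win2012r2-hardened", "linux-hardened-v3"}
certified_app_packages = {"web-server-pkg", "database-pkg", "monitoring-agent"}

def build_vm(base, packages):
    if base not in certified_base_images:
        raise ValueError("uncertified base image: " + base)
    uncertified = [p for p in packages if p not in certified_app_packages]
    if uncertified:
        raise ValueError("uncertified packages: " + ", ".join(uncertified))
    # The orchestrator clones the base image, then layers packages on top.
    return ["clone " + base] + ["install " + p for p in packages]

print(build_vm("linux-hardened-v3", ["monitoring-agent", "database-pkg"]))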

Asset and Configuration Management

Although I covered asset and configuration management in Chapter 2, from a security standpoint, there are specific processes that should be put into place to support an automated cloud environment.

Many organizations have a mature asset and configuration-management system in place. In a private cloud environment that uses automated provisioning, the key to success is to also automate the updating of asset and configuration databases. This means that you configure the cloud management platform, which controls and initiates automation, to immediately log the new VM, application, or software upgrades into the asset/configuration database(s). Because this is done through automation, there is little chance that updating the asset or configuration databases is skipped and the accuracy of the data will be improved when compared to legacy manual update procedures.

Key Take-Away

The overall goal is to have all inventory, monitoring, and security systems updated in real time so that network, security, and operations teams are continuously monitoring the current state of the environment and all its assets and continuously changing configurations.

Some organizations have very formal configuration control approval procedures and committees in place. Although the need for these is understood, the concept of a manual approval process and committee is contrary to the tenets of cloud automation and rapid provisioning (which includes routine software updates). I recommend that change control, just as with security, be changed to include preapproving new application patches, upgrades, gold images, and so on to allow the cloud automation system to perform its rapid provisioning responsibilities. As new systems are deployed in an automated manner, so too will the configuration log and database be updated in real time. These automated configuration changes, which are based on preapproved packages or configurations, should be marked as “automatically approved” in the change control log, fulfilling the purpose of a change control log as an auditing tool. In this case, the change log entries are entered automatically, but there will likely be other, more significant infrastructure configuration changes throughout the cloud that can and should still follow the manual change control board process.
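
A minimal sketch of such an “automatically approved” change log follows; the preapproved change IDs are hypothetical, and a real implementation would write to your change-management system rather than an in-memory list.

import datetime

preapproved_changes = {"os-patch-2015-06", "gold-image-v3", "web-server-pkg"}
change_log = []  # the log remains a complete audit trail either way

def record_change(target, change_id):
    entry = {
        "target": target,
        "change": change_id,
        "time": datetime.datetime.utcnow().isoformat(),
        "status": ("automatically approved"
                   if change_id in preapproved_changes
                   else "pending change control board"),
    }
    change_log.append(entry)
    return entry

print(record_change("vm-0042", "os-patch-2015-06")["status"])
print(record_change("core-router", "new-routing-policy")["status"])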

Customer Visibility into Security and Operations

In a public cloud, customers no longer need to allocate precious staff, funds, or time to work on routine systems administration and upkeep of the network. This includes security operations, continuous monitoring, and responding to security events and threats; however, some public cloud customers still want visibility into their hosted cloud.

Public cloud providers were initially reluctant to provide much visibility into what was considered internal operations. As customers adopted public cloud and managed private cloud services, they quickly realized that they were effectively blind to the operations that could affect their data and cloud-based systems. Providers periodically gave customers a summary report, but they almost never provided real-time security event and mitigation information. Customers of any cloud want more visibility into events, alerts, threats, and remediation activities relating to their data and their cloud services.

A private cloud model and management system is far more customizable and therefore capable of integrating with existing or new security software to provide real-time dashboards, statistics, event alerts, and mitigation data.

Key Take-Away

One of the major reasons enterprise customers deploy a private cloud is to have in-depth visibility and engagement with cloud security monitoring, events, and mitigation.

Enterprise customers often want to be aware of security events and remediation, whereas others just want visibility but otherwise remain hands-off toward the hosted cloud-based activities. The challenge is that most network monitoring and security systems are focused on consolidating triggers, alerts, and critical system events. The data is aggregated, and then correlations among multiple related events are found, which leads to earlier and more complete detection of the overall event or threat. In a multitenant cloud environment, these same advanced aggregation, analytics, and correlation tools are not well suited to then separating the data for distribution to individual tenants or consuming organizations. This is a primary reason why visibility and real-time access to security monitoring and events is a challenge for many cloud providers and cloud management systems. In a private cloud deployment, multitenancy is not as much of an issue, so there are more tools and options for presenting security monitoring to appropriate personnel within the consuming organization.
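
As a minimal sketch of what tenant-scoped visibility requires, the following tags each security event with a tenant ID at ingestion so that a consolidated event store can still feed per-tenant dashboards; the event fields and tenant names are illustrative.

events = [
    {"tenant": "dept-finance", "severity": "high", "msg": "port scan detected"},
    {"tenant": "dept-hr", "severity": "low", "msg": "failed logon"},
    {"tenant": "dept-finance", "severity": "low", "msg": "signature update"},
]

def tenant_dashboard(tenant, min_severity="low"):
    rank = {"low": 0, "high": 1}
    # Each tenant sees only its own slice of the consolidated event store.
    return [e for e in events
            if e["tenant"] == tenant
            and rank[e["severity"]] >= rank[min_severity]]

print(tenant_dashboard("dept-finance", "high"))  # only finance's high events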

Experience has shown that customers do not trust that everything is OK. They don’t want to depend solely on a monthly report that shows past events, threats, or vulnerabilities. Real-time monitoring and visibility into systems and security operations is clearly desired by cloud consumers. In the future, I expect to see better tools and improved customer-facing dashboards that provide roles-based configurable levels of continuous monitoring information — we’ll even see these with some public cloud providers in the near future.

Identity Management and Federation

User identity management, synchronization of directory services, and federation across multiple networks differ significantly in a cloud environment compared to traditional enterprise IT. Chapter 4 covers identity management and authentication for application migrations. Here, I discuss single sign-on and federation, which are unique security considerations in a cloud environment.

Single Sign-On

With the single sign-on (SSO) model, applications or other resources in the cloud use the same logon information that an end user provides when she logs on to her computer, precluding the need to prompt the user for any additional logon information. This is done through a variety of techniques depending on the desktop OS, the applications, the network infrastructure, and possibly third-party software that specifically enables this functionality.

Within a traditional local area network (LAN) hosted by an enterprise organization in its own facility, having a single network authentication system, such as Microsoft Active Directory, is not very difficult; in fact, it is a built-in feature of the Microsoft Windows Server OS. LDAP is a more universal industry standard for user directory services and authentication that is not specific to any software manufacturer. Security Assertion Markup Language (SAML) is an even better solution for cloud environments when SSO and federation are used. The challenge is when users access data on multiple server farms, across wide area networks (WANs), and on multiple applications created by different software manufacturers. As you implement cloud services, this becomes even more complex.

A cloud service provider can only do so much to enable SSO from its facilities. There are cloud providers that implement third-party software solutions that broker authentication to downstream applications and networks. This requires each cloud customer and application to integrate with the centralized authentication system that the cloud provider has chosen. Numerous identity and authentication systems are available in the industry; the cloud provider might offer one for customer use, or a customer can deploy its own within its VMs. So, there is no one answer to implementing SSO; however, LDAP and SAML are the primary industry standards. All applications and OSs that you want to integrate with the cloud or migrate to the cloud should support one or both of these protocols.
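
For illustration, here is a minimal sketch of authenticating a portal user against an enterprise directory using the open source ldap3 Python library (pip install ldap3); the server address and DN layout are assumptions for a hypothetical environment, and a SAML deployment would delegate this step to an identity provider instead.

from ldap3 import Server, Connection

def ldap_authenticate(username, password):
    server = Server("ldaps://ldap.example.com")                # assumed address
    user_dn = "uid=%s,ou=people,dc=example,dc=com" % username  # assumed layout
    conn = Connection(server, user=user_dn, password=password)
    ok = conn.bind()  # True only if the directory accepts these credentials
    conn.unbind()
    return ok

if ldap_authenticate("alice", "s3cret"):
    print("grant a portal session mapped to alice's organization and ACLs")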

Federation

One area related to SSO and identity management is federation, also called Federated Identity Management (FIM). Federation is when you connect multiple clouds, applications, or organizations to one or more other parties (see Figure 6-1). Lists of users and authentication details are shared securely across all parties of the federation. This makes possible features such as allowing one organization to see another organization in a Global Address List (GAL), or sending an instant message to a person in another organization. The federation software creates and maintains a bridge between the disparate networks and applications, effectively synchronizing and/or sharing user lists between one or more organizations. In a cloud environment, with distributed applications, data, and users located potentially all over the world, federation and SSO are what make this seamless experience possible. Your average daily tasks performed in the cloud might actually involve logging on to a dozen applications, databases, networks, and cloud providers, but all of this is transparent to you due to federation and SSO.


Figure 6-1. An overview of Federated Identity Management
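
As a minimal sketch of the user-list synchronization that federation software performs, the following merges partner directories into a shared lookup such as a GAL; the data shapes are invented, and real federation establishes trust out of band through certificates and signed metadata.

org_a_users = {"alice@org-a.example": "Alice Ng"}
org_b_users = {"bob@org-b.example": "Bob Hale"}

def federate(*directories):
    # Trust between the parties is assumed to be established out of band.
    gal = {}
    for directory in directories:
        gal.update(directory)  # synchronize/share user lists across parties
    return gal

global_address_list = federate(org_a_users, org_b_users)
print(global_address_list["bob@org-b.example"])  # visible from organization A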

Customer Accreditation of Cloud Services

It is difficult — if not impossible — to get a public cloud provider to give an individual customer access to the provider’s network and allow customer IT security staff to perform an accreditation. In fact, showing customers what is happening inside the network can be considered tantamount to exposing intellectual property, giving customers — and potentially competitors — too much visibility into the internal security systems and procedures. Although most public cloud providers rarely allow individual customer inspection and accreditation, providers have in some cases allowed a third-party assessment so that the public cloud provider can sell its services to government and other large customers with requirements for an official security accreditation. The U.S. government’s FedRAMP accreditation process, which uses third-party assessment vendors, is an excellent example of this approach. For more details on FedRAMP and other international security entities, see “Cloud Security Certifications” later in this chapter.

A private cloud deployment is much more accommodating and suitable for customer accessibility and a security accreditation process. The security standards and accreditation process are the same as or very similar to those for a public cloud, with any multitenant cloud receiving the highest level of scrutiny for security controls and customer data isolation.

As part of planning your organization’s transition to cloud, you need a complete understanding of the cloud models, the security standards that you need to follow, and the personnel who will perform the security accreditation. When procuring a public cloud service, your evaluation criteria should include the desired security accreditation. For a private cloud deployment, ensure that your organization or the systems integrator that does the deployment is capable and experienced in highly secure cloud computing and already has security accreditation experience. Finally, remember that security accreditations normally require annual reassessments and certification renewals (or perhaps on some other time interval). As most public and private clouds mature and add new capabilities over time, these periodic accreditations are not just a quick “rubber stamp” process but involve assessing the entire system again with particular attention to the new services or configuration changes.

Data Sovereignty and On-Shore Support Operations

Data sovereignty refers to where your data is actually stored geographically in the cloud — whether it is stored in one or more datacenters hosted by your own organization or by a public cloud provider. Due to differing laws in each country, sometimes the data held by the cloud provider can be obtained by the government in whose jurisdiction the data is stored, or perhaps by the government of the country where the data provider is based, or even by foreign governments through international cooperation laws. Further government monitoring or snooping (some governments tend to change laws or push the bounds of legality to serve their own purposes) on behalf of crime prevention agencies has also become a concern.

Not everything here is doom and gloom. There are “safe harbor” agreements between key governments such as the United States and the European Union to better enforce data privacy and clarify specific scenarios and data types that can legally be turned over by a cloud provider upon official requests. Organizations using public cloud services should examine the policies and practices of a prospective cloud provider to answer the following questions:

§ Where will data, metadata, transaction history, personally identifiable data, and billing data be stored?

§ Where will backups or replicated data for disaster recovery be located? What is the retention policy for legacy data and backups? How is retired data media securely disposed of?

§ Where are support personnel located, and to what do they have access? How are their background checks performed?

§ Where is the provider’s primary headquarters, location of incorporation, and under which laws and jurisdictions do they fall? How does the provider respond to in-country or foreign government requests for data discovery?

§ Is the government authority or third party obligated to notify you that it has taken possession of your data?

Data sovereignty and data residency have become a more significant challenge and decision point than most organizations and cloud service providers originally anticipated. Initially, one of the selling points a cloud service provider would make was that you, as the customer, didn’t need to be concerned with where and how it stored your information — there was an SLA to protect you. The lesson learned is to ask or contractually require your cloud provider to store your data in the countries or datacenter locations that fit your data sovereignty requirements. Also consider whether you require that all operational support personnel at the cloud provider be located within your desired country and be local citizens (preferably with background checks performed regularly) — this, in combination with data sovereignty, will help to ensure that your data remains private and is not unnecessarily exposed to foreign governments or other parties with whom you did not intend to share it.

Key Take-Away

You should request that data be stored in the country of your choosing to maintain your data privacy rights. Many public cloud providers now offer these options and this is definitely a consideration for building your own private or hybrid cloud environment.
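
As a minimal sketch of enforcing such a requirement in software, the following checks every data placement — including backups and replicas — against a customer’s contracted residency list before anything is written; the region names and the contract table are illustrative.

residency_contracts = {
    "customer-eu": {"de-frankfurt", "nl-amsterdam"},
    "customer-us": {"us-east", "us-west"},
}

def place_data(customer, region, purpose="primary"):
    allowed = residency_contracts.get(customer, set())
    if region not in allowed:
        raise PermissionError(
            "%s placement in %s violates %s's residency contract"
            % (purpose, region, customer))
    return "%s data for %s placed in %s" % (purpose, customer, region)

print(place_data("customer-eu", "de-frankfurt"))
print(place_data("customer-eu", "nl-amsterdam", purpose="backup"))
# place_data("customer-eu", "us-east") would raise PermissionError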

If you are a private cloud operator, you should not only have published policies to address these concerns, but also consider formal written internal policies, such as the following:

§ All staff must know the policies with regard to when and if to respond to government and other requests for data release.

§ Staff must be fully versed in all data retention policies and procedures for data retirement.

§ There must be a clearly articulated policy for cloud data locations, replications, and even temporary data restorations or replications in order to maintain data sovereignty for customers with such requirements and contracts.

§ An internal policy review committee must be established, as well as a channel into the corporate legal department, for handling each official data request and overall policy governance.

§ A documented plan should be in place for how to handle document requests and other legal events that might occur — be specific with respect to law and government identities and how each will be handled.

Cloud Security Certifications

There are dozens of government institutions in the U.S. and worldwide that have published computer security guidance. U.S. government customers often mandate these security specifications, but they are also excellent guidelines for nongovernment clouds.

There are also a significant number of security policies that come from U.S. government organizations, and certain industries such as healthcare and finance are required to follow them. Commercial and government agencies are required to implement these security standards and often go through a formal security accreditation process before their computer systems can go online.

Table 6-1 lists many of the organizations that have created security standards or accreditation criteria. This is not an exhaustive list, and new security policies are introduced frequently. Use this as a starting point to identify which cloud and security standards your organization wants — or is required — to follow when procuring or building a cloud service.

Table 6-1. Security standards and organizations

Cloud Security Alliance (CSA)

The CSA Security, Trust & Assurance Registry (STAR) initiative was launched in 2011 to improve transparency and assurance in the cloud. STAR is a publicly accessible registry that documents the security controls provided by various cloud computing offerings, thereby helping companies to assess the security of cloud providers they currently use or with which they are considering contracting. STAR consists of three certification levels that are based on ISO 27001 and CSA’s Cloud Controls Matrix (CCM) standards:

§ Level 1 CSA STAR Self Assessment

§ Level 2 CSA STAR Certification/Level 2 CSA STAR Attestation

§ Level 3 CSA STAR Continuous

EuroCloud Star Audit (ECSA)

EuroCloud is an independent nonprofit organization focused on cloud security standards with voluntary participation by most European countries. The EuroCloud Star Audit is a 1-to-5 star-graded certification suitable for any company operating an Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS).

E-Government Act

The United States E-Government Act (Public Law 107-347) was passed by the 107th Congress in 2002. It is focused on the importance and impact of information security on the economic and national security interests of the United States.

FISMA

Title III of the E-Government Act, titled the Federal Information Security Management Act (FISMA), requires each United States federal agency to develop, document, and implement an agency-wide program to provide information security for the systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. There are three levels of FISMA controls and compliance: Low, Moderate, and High.

FIPS

The U.S. Federal Information Processing Standards (FIPS) provide guidance and minimum characteristics for areas such as data encryption and IT security (many U.S. federal agencies are required to adhere to these policies):

§ FIPS 199 Standards for Security Categorization of Federal Information and Information Systems

§ FIPS 200 Minimum Security Requirements for Federal Information and Information Systems

§ FIPS 140-2 Security Requirements for Cryptographic Modules

FedRAMP

The U.S. Federal Risk and Authorization Management Program (FedRAMP) is a U.S. government-wide program, created by the General Services Administration (GSA), that provides a standardized approach to security assessment, authorization, and continuous monitoring of cloud products and services.

Note that many U.S. government agencies still use FISMA as a standard for their computer security accreditation, but FISMA is an older, more generic law and not as specific to the cloud as FedRAMP.

FedRAMP was originally intended to apply to U.S. federal government agencies and public cloud providers offering services to these agencies. Although the basic principles and guidance within FedRAMP are valuable to all cloud environments, it is unclear if or when FedRAMP will be extended to cover other forms of cloud, including private clouds.

U.S. Department of Defense (DoD) organizations usually follow more robust cloud security guidelines published by the U.S. Defense Information Systems Agency (DISA). DISA has recently introduced a concept (not yet an actual standard) called FedRAMP+, which calls for additional security controls and requirements — on top of the basic FedRAMP — to meet DoD cloud security requirements.

National Institute of Standards and Technology (NIST)

NIST is responsible for publishing computer standards and guidance under FISMA. There are numerous general information technology and cloud-specific “special publications” published by NIST. Some of the more relevant documents are the following:

§ Special Publication 500-291 v2. Defines the Cloud Computing Standards Roadmap. This updated publication, released in July 2013, provides a significant number of updated cloud brokering, hybrid cloud, portability, and security standards.

§ Special Publication 500-292. Defines the Cloud Computing Reference Architecture.

§ Special Publication 800-146. NIST, which is a part of the U.S. Department of Commerce, has issued a document, titled Cloud Computing Synopsis and Recommendations. This document provides definitions of cloud deployment models, cloud characteristics, and recommendations. Although NIST is a government organization, these standard definitions are often utilized throughout the world.

§ Special Publication 800-12. This document provides a broad overview of computer security and control areas. It also emphasizes the importance of security controls and ways to implement them. Initially this document was aimed at the federal government, although you can apply most practices outlined in it to the private sector as well. Specifically, it was written for those people in the federal government responsible for handling sensitive systems.

§ Special Publication 800-14. This document describes common security principles, providing a high-level description of what should be incorporated within a computer security policy. It describes what can be done to improve existing security as well as how to develop a new security practice.

§ Special Publication 800-26. This document provides advice on how to manage IT security, emphasizing the importance of self-analysis and risk assessments.

§ Special Publication 800-37. This document, titled Guide for Applying the Risk Management Framework to Federal Information Systems and updated in 2010, provides a new risk approach.

§ Special Publication 800-53. This document is titled Guide for Assessing the Security Controls in Federal Information Systems. It was updated in August 2009, and it specifically addresses the 194 security controls that are applied to a system to make it “more secure.”

STIG

A Security Technical Implementation Guide (STIG) is a methodology for standardized secure installation and maintenance of computer software and hardware. The term was coined by DISA, which creates configuration documents in support of the U.S. Department of Defense (DoD). The implementation guidelines include recommended administrative processes and span the device’s lifecycle.

ISO

The International Organization for Standardization (ISO) is similar to NIST, but it is a widely accepted international, nongovernment entity, whereas NIST is primarily used by U.S. government organizations.

ISO 27001/ISO 27002

The ISO 27001 document provides the underlying information security management system standards and taxonomy, with ISO 27002 providing best-practice recommendations on information security management for use by those responsible for initiating, implementing, or maintaining Information Security Management Systems (ISMS). Information security is defined within the standard in the context of the C-I-A triad: the preservation of confidentiality (ensuring that information is accessible only to those authorized to have access), integrity (safeguarding the accuracy and completeness of information and processing methods), and availability (ensuring that authorized users have access to information and associated assets when required).

IT Grundschutz

A security certification scheme created by the German government’s Federal Office for Information Security (BSI), which provides a baseline framework and a basic list of security requirements, although it is not specific to the cloud.

OMB A-130

The Office of Management and Budget (OMB) through Circular A-130, Appendix III, Security of Federal Automated Information Resources, requires executive agencies within the federal government to (a) plan for security; (b) ensure that appropriate officials are assigned security responsibility; (c) periodically review the security controls in their information systems; and (d) authorize system processing prior to operations and, periodically, thereafter.

OMB M-07-16

Defines requirements for “Safeguarding Against and Responding to the Breach of Personally Identifiable Information.”

HIPAA

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) defines privacy and security rules protecting individually identifiable health information. Any system that processes or stores personally identifiable health information must adhere to these security regulations.

HSPD-7

Homeland Security Presidential Directive (HSPD-7) defines Critical Infrastructure Identification, Prioritization, and Protection standards.

PCI DSS

The Payment Card Industry Data Security Standard (PCI DSS) is a multifaceted security standard that includes requirements for security management, policies, procedures, network architecture, software design, and other critical protective measures. This comprehensive standard is intended to help organizations proactively protect customer account data.

FERPA

The U.S. Education Department is releasing a Notice of Proposed Rule Making (NPRM) under the Family Educational Rights and Privacy Act (FERPA). The proposed regulations would give states the flexibility to share data to ensure that taxpayer funds are invested wisely in effective programs.

COPPA

Children’s Online Privacy Protection Act of 2000 applies to the online collection of personal information from children under 13. The new rules define what a website operator must include in a privacy policy, when and how to seek verifiable consent from a parent, and what responsibilities an operator has to protect children’s privacy and safety online.

CIPA

The Children’s Internet Protection Act (CIPA) is a federal law enacted by Congress to address concerns about access to offensive content over the Internet on school and library computers. CIPA imposes certain types of requirements on any school or library that receives funding for Internet access or internal connections from the E-rate program — a program that makes certain communications technologies more affordable for eligible schools and libraries.

DIACAP

The U.S. Department of Defense (DoD) Information Assurance Certification and Accreditation Process (DIACAP) is the DoD process to ensure that risk management is applied on information systems. DIACAP defines a DoD-wide formal and standard set of activities, general tasks and a management structure process for the certification and accreditation (C&A) of a DoD IS that will maintain the information assurance (IA) posture throughout the system’s life cycle.

Note that the directive to DoD agencies is to begin using the newer DoD Cloud Computing Security Requirements Guide (SRG) defined by DISA. This SRG details the latest cloud security standards along with correlation back to the DIACAP, FedRAMP, and other DoD IT and cloud security and risk frameworks.

Defense Information Systems Agency (DISA) SRG

The Defense Information Systems Agency (DISA) is the U.S. Department of Defense (DoD) organization that governs IT standards and guidance and provides some centralized IT services to DoD organizations. DISA has produced several cloud security guidelines, the latest called the DoD Cloud Computing Security Requirements Guide (SRG), initially published in January 2015, which replaces the previous Cloud Security Model (CSM). The SRG defines roles and standards for private cloud and external cloud service providers based on four Impact Levels. Although this is a relatively new standard that is likely to mature over time, the DISA SRG is more specific to the cloud than the DIACAP standard for overall DoD IT. Refer to the DoD Cloud Computing SRG at www.disa.gov.

This DISA standard is intended for DoD organizations and it contains more security controls and requirements than FedRAMP or FISMA. The four SRG levels are as follows (paraphrased and translated to remove DoD terms and extraneous jargon for this book):

§ Level 2: All data cleared for public release; includes some private unclassified information not considered controlled or mission critical; some level of minimal access control required.

§ Level 4: Data that must be controlled or is mission critical; data that is protected under law or policy and requires protection for unauthorized disclosure. Controlled data includes export controlled, personally identifiable information, protected health information, law enforcement, and for official use classifications.

§ Level 5: Controlled and mission critical data that requires a higher level of protection as determined by the information owner, public law, or other government regulations.

§ Level 6: Classified data to include compartmented information; data classified due to Executive Orders, or other policies. This impact level might be suitable for the highest-level intellectual property, trade secrets, and similar data that would result in grave harm to an organization if data is lost or compromised.

Common Criteria/ISO 15408

A framework that provides assurance that the process of specification, implementation, and evaluation of a computer security product has been conducted in a rigorous and standard manner. The Common Criteria for Information Technology Security Evaluation (CC), and the companion Common Methodology for Information Technology Security Evaluation (CEM) are the technical basis for an international agreement, the Common Criteria Recognition Arrangement (CCRA), which ensures the following:

§ Products can be evaluated by competent and independent licensed laboratories so as to determine the fulfillment of particular security properties, to a certain extent or assurance.

§ Supporting documents are used within the Common Criteria certification process to define how the criteria and evaluation methods are applied when certifying specific technologies.

§ The certification of the security properties of an evaluated product can be issued by a number of Certificate Authorizing Schemes, with this certification being based on the result of their evaluation.

§ These certificates are recognized by all the signatories of the CCRA.

ETSI CSC

The European Telecommunications Standards Institute (ETSI) is an independent, nonprofit standards organization for the telecommunications industry in Europe. ETSI produces globally applicable standards for Information and Communications Technologies (ICT), including fixed, mobile, radio, converged, broadcast, and Internet technologies. ETSI has commissioned the Cloud Standard Coordination initiative to define and develop cloud standards including nomenclature, terms, operational roles, and use cases for cloud computing.


Cloud Security Best Practices

Based on lessons learned and experience from across the cloud industry, you should consider the following best practices for your organization’s planning.

Planning

As an organization plans for transitioning to a cloud service or deploying a private or hybrid cloud, the first step from a security standpoint is to consider what IT systems, applications, and data should or must remain within a legacy enterprise datacenter. Here are some considerations:

§ Perform assessments of each application and data repository to determine the security posture, suitability, and priority to transition to the cloud. Match security postures to the cloud architecture, controls, and target security compliance standard.

§ Work with application and business owners to determine which applications and data you can move easily to a cloud and which you should evaluate further or delay moving to a cloud. Repeat this assessment on all key applications and data repositories to develop a priority list with specific notations on the desired sensitivity, regulatory, or other security classifications.

§ Consider the cloud model(s) to be procured or deployed internally:

§ Although public cloud services provide solid infrastructure security, they often do not offer the level of security or customization you might need.

§ A private cloud can be heavily customized to meet security or feature requirements, but you need to control costs, scope creep, and the tendency to overbuild your initial cloud.

§ Determine who the consumers of the cloud services will be. If multiple departments or peer agencies will be using the cloud service, determine which security team or organization controls the standards for the overall cloud or each application workload:

§ Adopt a baseline security posture, and let individual consumers or peer agencies be more involved in setting the security standards for their unique applications and mission-critical workloads.

§ Publish the security operational processes, ownership, and visibility of statistics, events, and reports to ensure acceptance of the cloud by consuming agencies and users.

Multitenancy

Most clouds use software-based access controls and permissions to isolate customers from one another in a multitenant cloud environment. Hardware isolation is an option for private clouds and some virtual private clouds, but at additional cost.

§ Understand how multitenancy is configured so that each consuming organization is isolated from all the others. In a public cloud, the use of software-based access controls, role-based permissions, and storage and hypervisor separation is commonplace. If more isolation or separation of workloads and data between customers is required, other options such as a virtual private cloud or a private cloud are often more suitable.

§ Implement or connect an enterprise identity management system such as Active Directory, LDAP, or SAML service. Some cloud providers and management platforms can optionally connect to multiple directory or LDAP services — one for each consuming organization.
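
To make the identity integration concrete, here is a minimal sketch, assuming the open source Python ldap3 library and hypothetical tenant directory addresses, of how a cloud portal might authenticate each tenant's users against that tenant's own directory service:

```python
# Minimal sketch: per-tenant directory authentication. The hostnames,
# base DNs, and tenant names are hypothetical; assumes the open source
# "ldap3" Python library (pip install ldap3).
from ldap3 import ALL, Connection, Server

# Each consuming organization brings its own directory service.
TENANT_DIRECTORIES = {
    "agency-a": {"url": "ldaps://ldap.agency-a.example",
                 "base_dn": "dc=agency-a,dc=example"},
    "agency-b": {"url": "ldaps://ldap.agency-b.example",
                 "base_dn": "dc=agency-b,dc=example"},
}

def authenticate(tenant: str, username: str, password: str) -> bool:
    """Attempt an LDAP bind against the tenant's own directory.

    A successful bind proves the credentials; the cloud platform never
    stores or replicates tenant passwords itself.
    """
    directory = TENANT_DIRECTORIES[tenant]
    user_dn = f"uid={username},ou=people,{directory['base_dn']}"
    server = Server(directory["url"], get_info=ALL)
    try:
        # auto_bind=True raises an exception if the bind (login) fails.
        conn = Connection(server, user=user_dn, password=password,
                          auto_bind=True)
        conn.unbind()
        return True
    except Exception:
        return False
```

Because the platform simply delegates the bind to each tenant's directory, credentials remain under the consuming organization's control, which is usually the point of federating identity in the first place.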

Automation in a Cloud

The first rule in an automated cloud is to plan and design a cloud system with as few manual processes as possible. This might be contrary to ingrained principles of the past, but you must avoid any security processes or policies that delay or prevent automation. Here are some considerations:

§ Adopt the theme “relentless pursuit of automation.”

§ Eliminate any legacy security processes that inhibit rapid provisioning and automation.

Experience has shown that traditional security processes have tended toward manual approvals, after-provisioning audits, and slow, methodical assessments; these tendencies must change when building or operating a cloud. Precertify everything to allow automated deployment, and avoid forcing any manual security assessments into the provisioning process.

§ Have IT security teams precertify all “gold images” or templates that can be launched within new VMs. Certification of gold images is not just an initial step when using or deploying a new cloud; it must be repeated whenever images are added or updated (a sketch of such a certification gate follows this list).

§ Have security experts perform scans and assessments of every new or modified gold image before loading it into the cloud management platform and presenting it for customers to order.

§ Understand that when a new gold image is accepted and added to the cloud, the cloud operational personnel (provider or support contractor, depending on contractual terms) might now be responsible for all future patches, upgrades, and support of the template.

§ Have security precertify all applications and future updates that will be available on the cloud. You should configure applications as automated installation packages so that any combination of application packages can be ordered and provisioned on top of a VM gold image. Additional packages for upgrades and patching of the OS and applications should also be deployed in an automated fashion to ensure efficiency, consistency, and configuration management.

§ Realize that this precertification is not a particularly difficult task, but it will be an ongoing effort because new applications and update packages are introduced to the cloud frequently and continuously. Finally, understand that more complex multitiered applications (e.g., multitiered PaaS applications) will require significantly more security assessment and involvement during the initial application design.
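
As a minimal illustration of this precertification gate, the following sketch uses hypothetical stand-in functions (run_security_scan, publish_to_catalog) in place of whatever scanner and catalog tooling your cloud management platform actually provides:

```python
# Minimal sketch of a gold-image precertification gate; the scan and
# catalog helpers below are hypothetical stand-ins for your real tools.
import hashlib
import pathlib

def run_security_scan(image_path: str) -> list[str]:
    """Hypothetical stand-in: invoke your vulnerability scanner here."""
    return []  # a clean scan returns no findings

def publish_to_catalog(image_path: str, sha256: str) -> None:
    """Hypothetical stand-in: push the image to the self-service catalog."""
    print(f"published {image_path} (sha256={sha256[:12]}...)")

def certify_and_publish(image_path: str) -> bool:
    """Scan a candidate gold image and publish it only if the scan is clean."""
    # Record a checksum so a certified image cannot be silently swapped.
    digest = hashlib.sha256(pathlib.Path(image_path).read_bytes()).hexdigest()
    findings = run_security_scan(image_path)
    if findings:
        print(f"REJECTED {image_path}: {len(findings)} open findings")
        return False
    publish_to_catalog(image_path, sha256=digest)
    return True
```

The same gate runs every time an image is updated, which is how the ongoing certification effort described above stays automated rather than becoming a manual review queue.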

It is common for customers to request additional network configurations or opening of firewall ports. These can be handled through a manual vetting, approval, and configuration process, but you might want to charge extra for this service. Here are some things to keep in mind:

§ Segment the network so that each customer (not each VM, which is often overkill), at a minimum, has its own virtual network. This is preferable to a physical network per customer, which is difficult to automate and more expensive.

§ You can offer additional network segmentation as an option for each tenant or customer organization by using virtual firewalls to isolate networks. Applications that need to be Internet-facing should be further segmented and firewalled from the rest of the production cloud VMs and applications.

§ Avoid overdoing the default segmentation of networks, because this only complicates the offerings and usefulness of the cloud environment and increases operational management. Stick with a basic level of network segmentation, such as one virtual network per customer by default, and offer upgrades to additional virtual networks only when necessary.

§ Consider precertifying a pool of additional VLANs, firewall port rules, load balancers, and storage options and make these available to cloud consumers via the self-service control panel.
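
The following minimal sketch illustrates the precertified-pool idea for VLANs; the VLAN ID range and tenant names are purely illustrative:

```python
# Minimal sketch: self-service allocation from a pool of precertified
# VLANs, so no manual network approval sits in the provisioning path.
PRECERTIFIED_VLANS = set(range(100, 200))  # hypothetical approved ID range
ALLOCATED: dict[str, int] = {}             # tenant name -> VLAN ID

def allocate_vlan(tenant: str) -> int:
    """Hand a tenant the next free, precertified VLAN."""
    if tenant in ALLOCATED:
        return ALLOCATED[tenant]   # one virtual network per tenant by default
    free = PRECERTIFIED_VLANS - set(ALLOCATED.values())
    if not free:
        raise RuntimeError("precertified VLAN pool exhausted; certify more")
    ALLOCATED[tenant] = min(free)
    return ALLOCATED[tenant]

def release_vlan(tenant: str) -> None:
    """Return the VLAN to the pool when the subscription is canceled."""
    ALLOCATED.pop(tenant, None)
```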

Asset and Configuration Management

The key to success is to also automate the updating of asset and configuration databases. This means that you configure the cloud management platform, which controls and initiates automation, to immediately log the new VM, application, or software upgrade into the asset and configuration databases. Here are some considerations:

§ Reconsider all manual approval processes and committees that are contrary to cloud automation and rapid provisioning (which includes routine software updates).

§ Update the legacy change control process by preapproving new application patches, upgrades, gold images, and so on so that the cloud automation system can perform rapid provisioning.

§ Integrate the cloud management system to automatically update the configuration log/database in real time as new systems are provisioned and launched. These automated configuration changes, which are based on preapproved packages or configurations, should be marked as “automatically approved” in the change control log.
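
As a minimal sketch of this integration, assuming the cloud management platform can call a small hook after each automated action, the following example uses Python's built-in sqlite3 module as a stand-in configuration database:

```python
# Minimal sketch: the provisioning workflow calls record_change() the
# moment automation completes, so the configuration database never lags.
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect("cmdb.sqlite")
db.execute("""CREATE TABLE IF NOT EXISTS change_log (
    asset_id TEXT, asset_type TEXT, action TEXT,
    approval TEXT, recorded_at TEXT)""")

def record_change(asset_id: str, asset_type: str, action: str) -> None:
    """Log an automated change as 'automatically approved' in real time."""
    db.execute("INSERT INTO change_log VALUES (?, ?, ?, ?, ?)",
               (asset_id, asset_type, action, "automatically approved",
                datetime.now(timezone.utc).isoformat()))
    db.commit()

# For example, called by the workflow the moment a VM goes live:
record_change("vm-0042", "IaaS VM", "provisioned from gold image")
```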

Monitoring and Detection Outside Your Network Perimeter

Traditional datacenter and IT security focused on monitoring for threats and attacks against the private network, the datacenter, and everything inside your perimeter. Cloud providers should increase the radius of monitoring and detection to find threats before they ever reach your network. Here are some things to keep in mind:

§ Traditional web hosting services and content delivery networks (CDNs) are a good fit to host, protect, and cache static web content, but many of these providers do not protect dynamic web content (logons, database queries, searches). All an inbound attacker needs to do is perform a repetitive search every millisecond, and your CDN can do little about it because it must forward all requests to your backend application or database.

§ Consider a third-party network hosting service in which all data traffic to your cloud infrastructure first goes through the provider’s network and filters. This provider takes the attacks from the Internet and forwards only legitimate traffic to your network. A significant number of configurable filtering and monitoring options are available from these providers. In addition, consider using these providers for all outbound traffic from your cloud, thus truly hiding all of your network addresses and services from the public Internet.

§ Consider a third-party provider of secure DNS services that has the necessary security and denial-of-service protections in place. Because this provider hosts your DNS services, your internal DNS servers are no longer the attack vector; the third-party DNS provider takes the brunt of an attack and forwards only legitimate traffic.

Consolidated Data in the Cloud

As discussed in this chapter, many customers are concerned that data consolidated and hosted in the cloud might be less secure. The truth is that having centralized cloud services hosted by a cloud provider or your own IT organization enables a consolidation of all the top-level security personnel and security tools. Most organizations would rather have this concentration of expertise and security tools than a widely distributed group of legacy or mediocre tools and skillsets. Here are some considerations:

§ Technically, a cloud service has no extra vulnerabilities compared to a traditional datacenter, given the same applications and use cases. The cloud might represent a bigger target because data is more consolidated, but you can offset this by deploying the newest security technologies and skilled security personnel.

§ Continuous monitoring is the key to good security. Continuous monitoring in the cloud might mean protecting and monitoring multiple cloud service providers, network zones and segments, and applications.

§ Focus monitoring and protections not only at your network or cloud perimeter, but also begin protections outside your perimeter (see “Monitoring and Detection Outside Your Network Perimeter”). Don’t forget to monitor your internal network, because a significant number of vulnerabilities still originate from internal sources.

§ Focus on zero-day attacks and potential threats rather than relying solely on pattern- or signature-based security that catches only known, past threats. Sophisticated attackers know that their best chance of success is to find a new vector into your network, not an older vulnerability that you have probably already remediated.

Continuous Monitoring

As soon as new systems are brought online and added to the asset and configuration management databases (as described earlier), the security management systems should immediately be triggered to launch any system scans and start routine monitoring. There should be little or no delay between a new system being provisioned in the cloud and the beginning of security scans and continuous monitoring. Monitoring of the automated provisioning, customer orders, system capacity, system performance, and security is critical in a 24-7, on-demand cloud environment. Here are some considerations:

§ All new applications, servers/virtual servers, network segments, and so on should be automatically registered to a universal configuration database and trigger immediate scans and monitoring. Avoid manually adding new applications or servers to the security, capacity, or monitoring tools to ensure that continuous monitoring begins immediately when services are brought online through the automation processes.

§ Monitoring of automated provisioning and customer orders is critical in an on-demand cloud environment. Particularly during the initial months of a private cloud launch, numerous tweaks and improvements to the automation tools and scripts will be needed to continuously remove manual processes, improve error handling, and refine resource allocation.

§ Clouds often support multiple tenants or consuming organizations. Monitoring and security tools often consolidate or aggregate statistics and system events to a centralized console, database, and support staff. When tracking, resolving, and reporting events and statistics, the data must be segmented and reported back to each tenant so that each sees only its own private information; the software tools used by cloud providers often have limitations in keeping report data separate for multiple tenants.

§ There are three key tenets of continuous monitoring:

Aggregate diverse data

Combine data from multiple sources generated by different products/vendors and organizations in real time.

Maintain real-time awareness

Utilize real-time dashboards to identify and track statistics and attacks. Use real-time alerting for anomalies and system changes.

Create real-time data searches

Develop and automate searches across otherwise unrelated datasets to identify the IP addresses from which attacks originate, transforming the raw data into actionable intelligence that can be used to terminate hostile traffic.
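
As a minimal sketch of this third tenet, the following example, using purely illustrative log formats, correlates two otherwise unrelated datasets (firewall denies and web server requests) to surface attacking IP addresses:

```python
# Minimal sketch: turn two unrelated datasets into actionable intelligence
# by flagging source IPs that misbehave in both at once. The event and
# request field names are illustrative.
from collections import Counter

def attacking_ips(firewall_events, web_requests, threshold=100):
    """Return source IPs seen in firewall denies and hammering the web tier."""
    denied = Counter(e["src_ip"] for e in firewall_events
                     if e["action"] == "deny")
    hits = Counter(r["src_ip"] for r in web_requests)
    return sorted(ip for ip in denied if denied[ip] + hits[ip] > threshold)

# The resulting list can feed a block list or a real-time dashboard.
```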

Denial-of-Service Plan

Denial-of-Service (DoS) attacks are so common that it is a matter of when and how often, not if, your cloud is attacked. Here are some recommendations:

§ Try to isolate your inbound and outbound network traffic behind a third-party provider that has DoS protections, honey pots, and dark networks that can absorb an attack and effectively hide your network addresses and services from public visibility (see “Monitoring and Detection Outside Your Network Perimeter”).

§ Have a plan ready for when a DoS attack against your network occurs. Perhaps you will initiate further traffic filters or blocks to redirect or drop the harmful traffic. Maybe you have another network or virtual private network (VPN) to which employees and partners can revert during the attack and still access your cloud-based services. Remember that the time to find a solution for a DoS attack is before one occurs; after a DoS attack is underway, your network and services are already so disrupted that recovery is much more difficult.
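
One “further traffic filter” worth preparing before an attack is a per-source rate limiter. This sketch, with illustrative rate values, implements a simple token bucket per source IP:

```python
# Minimal sketch of a per-source-IP token bucket; rates are illustrative.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float = 5.0, burst: float = 20.0):
        self.rate, self.burst = rate_per_sec, burst
        self.buckets: dict[str, tuple[float, float]] = {}  # ip -> (tokens, ts)

    def allow(self, src_ip: str) -> bool:
        """Return True if this source may send another request right now."""
        now = time.monotonic()
        tokens, last = self.buckets.get(src_ip, (self.burst, now))
        # Refill tokens for the elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1.0:
            self.buckets[src_ip] = (tokens, now)
            return False  # drop or tarpit this request
        self.buckets[src_ip] = (tokens - 1.0, now)
        return True
```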

Global Threat Monitoring

Consider implementing security tools, firewalls, and intrusion detection systems that subscribe to a reputable worldwide threat management service. These services detect new and zero-day attacks that might start anywhere across the globe and then transmit the patch, fix, or mitigation for that new threat to all worldwide subscribers immediately. Thus, every subscriber is “immediately” immune to the attack, even before the attack or intrusion attempt is ever made against your specific network. These services employ some of the world’s best security experts to identify and mitigate threats; no individual cloud provider or consuming organization can afford the quantity and level of skills that these providers maintain.
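
As a minimal sketch of how such a subscription might be consumed, assuming a hypothetical feed URL and JSON format (real services publish their own formats, such as STIX/TAXII), the following example merges newly published hostile IP addresses into a local block list:

```python
# Minimal sketch: poll a subscribed threat-intelligence feed and merge
# new indicators into local enforcement. The URL and JSON shape are
# hypothetical.
import json
import urllib.request

FEED_URL = "https://threat-feed.example/v1/indicators"  # hypothetical

def refresh_block_list(current_blocks: set[str]) -> set[str]:
    """Merge newly published hostile IPs into the local block list."""
    with urllib.request.urlopen(FEED_URL, timeout=10) as resp:
        indicators = json.load(resp)
    new_ips = {i["ip"] for i in indicators if i.get("type") == "ip"}
    return current_blocks | new_ips
```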

Change Control

Legacy change control processes need to evolve in an automated cloud environment. When each new cloud service is ordered and automated provisioning is completed, an automated process should also record the change control entry, which can in turn feed security operations and monitoring. Here are some recommendations:

§ Avoid all manual processes that might slow or inhibit the automated ordering and provisioning capabilities of the cloud platform.

§ When new IaaS VMs are brought online, for example, configure the cloud management platform to automatically enter a record into the organization’s change control system as an “automatic approval.” This immediately adds the change to the database and can be used to trigger further notifications to appropriate operational staff or to launch automatic security or inventory scanning tools.

§ Utilize preapproved VM templates, applications, and network configurations for all automatically provisioned cloud services (a sketch of such a preapproval gate follows this list). Avoid manual change control processes and approvals in the cloud ordering process.

§ Remember to record all VM, OS, and application patching, updates, and restores in the change control database. Finally, remember that the change control and inventory databases should also be updated immediately when a cloud service is stopped or a subscription is canceled.
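
Here is a minimal sketch of the preapproval gate referenced in this list; the template, application, and network names are purely illustrative:

```python
# Minimal sketch: orders flow straight to automated provisioning (and an
# auto-approved change record) only when every element is preapproved.
PREAPPROVED = {
    "templates": {"rhel7-gold-v3", "win2012r2-gold-v5"},
    "apps": {"apache-2.4", "postgres-9.4"},
    "networks": {"standard-tenant-vlan", "dmz-internet-facing"},
}

def order_is_preapproved(order: dict) -> bool:
    """True if the order requires no manual change approval."""
    return (order["template"] in PREAPPROVED["templates"]
            and set(order.get("apps", [])) <= PREAPPROVED["apps"]
            and order["network"] in PREAPPROVED["networks"])

order = {"template": "rhel7-gold-v3", "apps": ["apache-2.4"],
         "network": "standard-tenant-vlan"}
assert order_is_preapproved(order)  # proceeds without manual approval
```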