Implementing the Private Cloud and SCVMM - Mastering Hyper-V 2012 R2 with System Center and Windows Azure (2014)

Chapter 9. Implementing the Private Cloud and SCVMM

So far this book has covered aspects of Hyper-V, such as types of resources, high availability, and management. This chapter takes the capabilities enabled through Hyper-V and shows how to build on them by using the System Center management stack and by leveraging the virtual infrastructure.

In this chapter, you will learn to

·     Explain the difference between virtualization and the private cloud

·     Describe the must-have components to create a Microsoft private cloud

The Benefits of the Private Cloud

What is the private cloud? Understanding this is actually the hardest part of implementing the private cloud. One of my customers once said the following, and it's 100 percent accurate:

If you ask five people for a definition of the private cloud, you will get seven different answers.

Very smart customer in 2011

I like to think of the private cloud as having the following attributes:

·     Scalable and elastic, meaning it can grow and shrink as the load on the application changes

·     Better utilization of resources

·     Agnostic of the underlying fabric

·     Accountable, which can also mean chargeable

·     Self-service capable

·     All about the application

Let me explain this list in more detail. First, the all about the application attribute. In a physical setup, each server has a single operating system instance, which, as I've explored, means lots of wasted resources and money. The shift to virtualization takes these operating system instances and consolidates them onto a smaller number of physical servers by running each operating system instance in a virtual machine. Virtualization saves hardware and money but doesn't actually change the way IT is managed. Administrators still log on to the operating system instances at the console and still manage the same number of operating system instances. In fact, now administrators also have to manage the virtualization solution. While you may not log on to the actual console of a server, you are still remoting directly into the operating system to do management, and this is basically managing at the console level. The private cloud shifts the focus to the actual service being delivered and the applications used in that service offering. The private cloud infrastructure is responsible for creating the virtual machines and operating system instances that are required to deploy a service, removing that burden from the administrator.

Think back to the service templates covered in Chapter 6. Service templates in System Center Virtual Machine Manager allow the design of multitiered services, with each tier having the ability to use different virtual machine templates and different applications and configurations. Service templates allow administrators (and users, as I will explore later) to easily deploy complete instances of services without any concern for the actual virtual machine configuration or placement. Those service templates also integrate with network hardware such as load balancers, enabling automatic configuration of the network hardware when services are deployed that require hardware load balancing.

Initial deployment is fine, but what about maintenance, patching, and scaling? I'll cover other components of System Center 2012, such as Configuration Manager, which can simplify automated patching of both server and desktop operating systems, but you can still use service templates. Unlike a normal virtual machine template, which loses any relationship with a virtual machine deployed from the template, any instances of a service deployed from a service template maintain the relationship to the template.

Think about being scalable and elastic. Those same service templates allow a minimum, maximum, and initial instance count of each tier of service. Let's look at the web tier as an example. I could configure the tier to have a minimum instance count of 2, a maximum of 20, and an initial of 4. This means when load increases, the user can just access the tool and scale out the tier to a higher number, such as 10, and the back-end infrastructure automatically takes care of creating the new virtual machines, setting up any configuration, and adding the new instances to the load balancer and any other associated actions. When the load dies down, the user can scale in that service, and once again the back-end infrastructure will automatically delete some virtual machines that make up that tier to the new target number and update the hardware load balancer. I'm focusing on the user performing the scale-out and scale-in, but that same private cloud could have monitoring in place, such as with System Center Operations Manager; when load hits a certain point, it runs some automated process using System Center Orchestrator that actually talks to System Center Virtual Machine Manager to perform the scaling. That's why when I talk about the private cloud and focus on the application, it's not just about System Center Virtual Machine Manager; the entire System Center product plays a part in the complete private cloud solution. This scalability and elasticity—meaning having access to resources when needed but not using them and allowing other services to leverage them when not needed—are key traits of the private cloud. Many organizations will charge business units for the amount of computer resources that are used by their applications, which is why the ability to scale is important. By running many different services on a single infrastructure, you will see high utilization of available resources, getting more bang for the infrastructure buck.

Agnostic of the Underlying Fabric This can be confusing. For example, say I want to offer services to my customers, which could be my IT department, business units in the organization, or individual users. To those customers I want to provide a menu of offerings, known as a service catalog in ITIL terms. When those customers deploy a virtual machine or service, they should not need to know what IP address should be given to the virtual machine or virtual machines if deploying a single or multitiered service. The customer should not have to say which storage area network to use and which LUN should be used. Imagine I have multiple datacenters and multiple types of network and multiple hypervisors. If I want to allow non-IT people to deploy virtual machines and services, I need to abstract all that underlying fabric infrastructure from them. The user needs to be able to say (or request through a self-service interface), “I want an instance of this service in Datacenter A and B, and it should connect to the development and backup networks on a silver tier of storage.” Behind the scenes, the private cloud infrastructure works out that for the development network in Datacenter A, the network adapter needs an IP address in a certain subnet connected to a specific VLAN and some other subnet and VLAN in Datacenter B. The infrastructure works out that silver-tier storage in Datacenter A means using the NetApp SAN and only certain LUNs, while in Datacenter B the EMC SAN is used with other specific LUNs. The user gets the service and connectivity they need with zero knowledge of the actual infrastructure, which is exactly as it should be.

Self-service by the user for the provisioning of these services is a great way to think of the difference between virtualization and the private cloud. Let me walk through the most basic case: creating a new virtual machine for a user. Provisioning virtual machines in a virtual world goes like this (Figure 9.1):

1.  The user makes a request to the IT department. This could be a phone call, an email, or some help-desk request.

2.  The IT department gets the request and may do some validation such as checking with their manager if it's approved.

3.  IT will then launch their virtualization management tool and create a virtual machine from a template.

4.  IT will then contact the user giving them the IP address of the VM.


Figure 9.1 Traditional process for requesting virtual machines that is hands-on for the administrator

This sounds fast, but in reality this process ranges from a few days to six weeks in some companies I've worked with. It's a manual process, IT teams are busy, they just don't like the particular business user, or there could be “solar activity disrupting electronics” (which is the same as not liking the user). Whatever the case, because it's a manual process, it takes time and is often fairly low on the priority list.

It can also be fairly hard to track the allocation of virtual machines, which means often there is no ability to charge business units for the requested virtual machines. This can lead to virtual machine sprawl because to the business the virtual machines are free.

In the private cloud, this changes to the process shown in Figure 9.2. The resources used are the same, but the order of the steps and method has changed.

1.  The IT team use their management tool to carve out clouds of resources, which include compute, storage, and network resources, and assign clouds to users or groups of users with certain quotas. This is all done before any users request resources and is the only time the IT team has to do any work, freeing them up to spend their time on more forward-looking endeavors.

2.  The user accesses a self-service portal and fills out a basic request letting them select the type of VM or application and the cloud they want to create it in based on their allocations and quotas.

3.  The private cloud infrastructure takes the request and automatically provisions the VM, which could include workflows to request authorization from management if required. The user would see the details of their new VM in the self-service portal and could even get an automated email giving them details.

4.  The user is happy, and if they had a red button that said “That was easy,” they would be pressing it.


Figure 9.2 Provisioning process when using private cloud

The number-one fear of many IT departments I talk to about the private cloud is that the ability for users and business units to self-serve themselves to virtual machines will result in millions of virtual machines being created for no good reason, plunging the IT infrastructure into a dark age of VM sprawl beyond any previously envisioned nightmare scenario. But that is simply not the case.

Remember what you are doing. First, you are creating clouds of resources. You are defining what these clouds can access in terms of which virtualization hosts and, on those virtualization hosts, how much memory, virtual CPU, and disk IOPS can be consumed. You are setting which tiers of storage that cloud can access and how much space. You are setting which networks that cloud can connect to. You are setting which VM templates can be used by the users to create the virtual machines. For each user or group of users, you set the quotas of how many virtual machines they can create or how much memory and virtual CPUs they can use in each cloud. You can even set what the virtual machines can look like in terms of CPU and memory allocations. With a private cloud solution, you can set charge-back and show-back capabilities, so if a business unit creates a large amount of virtual resource, they get charged accordingly, so the solution is fully accountable. You can set expiry of virtual machines so they are automatically deleted after a period of time. Users can create only on the resources you have defined and within the limits you have configured. If they have a limit of five virtual machines and want to create a sixth, they would have to either delete a virtual machine, export a virtual machine to a library that you have granted them, or request an extension of their quota and go through an approval process.
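As one concrete illustration of how these quotas are enforced, SCVMM exposes per-user-role quotas through PowerShell. The sketch below uses cmdlets from the VMM 2012 R2 module; the cloud and role names are illustrative, and the exact parameter names should be verified in your environment with Get-Help, so treat this as a sketch rather than a definitive recipe.

```powershell
# Sketch: cap a self-service user role at 5 VMs and 16 GB of memory
# inside a cloud. "Lab Cloud" and "Lab Self-Service Users" are
# illustrative names; verify parameter names with
# Get-Help Set-SCUserRoleQuota before relying on this.
Import-Module virtualmachinemanager

$cloud = Get-SCCloud -Name "Lab Cloud"
$role  = Get-SCUserRole -Name "Lab Self-Service Users"
$quota = Get-SCUserRoleQuota -Cloud $cloud -UserRole $role

Set-SCUserRoleQuota -UserRoleQuota $quota -VMCount 5 -MemoryMB 16384
```

With a quota like this in place, a sixth virtual machine request from a member of that role is simply refused by SCVMM, which is the enforcement behavior described above.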

I think you will find this is more controlled and enforceable than any manual process you may have today. Users request a VM today, and you give it to them; it just takes you weeks, which may discourage business units from asking for virtual resources unless they really need them. That's a terrible way to control resources—by making it painful. Business users will just go elsewhere for their services such as setting up their own infrastructures or using public cloud services, which I've seen happen at a lot of organizations. It's far better to get good processes in place and enable the business so they can function in the most optimal way and use internal services where it makes sense. Remember with the private cloud, you can configure costs for virtual resources and charge the business, so if more virtual resources are required because the business units can now provision resources more easily, the IT department has the ability to gain the funding to procure more IT infrastructure as needed.

You can start slow. Maybe you use the private cloud for development and test environments first, get used to the idea, and get the users used to working within quotas. The private cloud infrastructure can be used in production, but perhaps it's the IT department using the private cloud and maybe even the self-service portal initially and then over time you turn it over to actual end users.

Private Cloud Components

The difference between virtualization and the private cloud is simply the management infrastructure. The same compute, network, and storage resources used for a virtualization infrastructure can be used for a private cloud. To turn virtualization into a private cloud solution, you need the right management stack. For a Microsoft private cloud, this is System Center 2012 R2 added to the virtualization foundation provided by Hyper-V 2012 R2. I provided a brief overview of System Center in Chapter 1; however, here I will cover the components that are critical for a private cloud and why they are critical.

Many of the benefits of the private cloud are related to the abstraction of resources, scalability, and controlled self-service. All of these benefits come primarily through SCVMM, so I will cover some of them here.

Consider a typical virtualization administrator who has full administrative rights over the virtualization hosts and the compute resource but no insight into the storage and network. This leads to many problems and challenges for the virtualization administrators:

·     “I have no visibility into what is going on at a storage level. I would like to have insight into the storage area networks that store my virtual machines from SCVMM.”

·     “Deploying server applications requires following a 100+ page procedure, which is prone to human error and to differences in implementation between development, test, and production. I want to be able to install the server application once and then just move it between environments, changing only the configuration.”

·     “My organization has many datacenters with different network details, but I don't want to have to change the way I deploy virtual machines based on where they are being deployed. The management tool should understand my different networks, such as production, backup, and DMZ, and set the correct IP details and use the right NICs in the hosts as needed.”

·     “I need to save power, so in quiet times I want to consolidate my virtual machines on a smaller number of hosts and power down unnecessary servers.”

While the name System Center Virtual Machine Manager might make it seem like it's focused on the virtual machine management, some of its most powerful features relate not to virtualization but to the storage and network fabric, as discussed in this book. SCVMM integrates with the storage in your environment using SMI-S to give insight into the storage but also classifies and assigns storage to hosts as required. SCVMM allows the network to be designed in SCVMM, providing easy network assignment for the virtual workloads, including providing network virtualization with connectivity to nonvirtualized networks with 2012 R2. All of this insight into the compute, storage, and network resources is completely abstracted for the end user, enabling simple provisioning. Because all the resources are exposed and managed centrally, this leads to a greater utilization of resources, which is a key goal of the private cloud.

Once all the types of resources are centrally managed and abstracted, a key piece of the private cloud is actually creating clouds that can be consumed by different groups within an organization, which could be business units, teams of developers, or different parts of IT; they could even be used by the same group of administrators for provisioning. Using separate clouds for different uses and groups, however, provides simpler tracking of resources. A cloud typically consists of a number of key resources and configurations that include the following:

·     The capacity of the cloud, such as how much memory, processor, and storage resources can be used by the cloud and which hosts are used

·     The classifications of storage exposed to the cloud

·     Which networks can be used by the cloud

·     The capabilities exposed to the cloud such as maximum number of vCPUs per VM

·     Library assets available to the cloud such as templates

·     Writable libraries for the purpose of storing virtual machines by cloud users

Notice that the cloud has specific capacity assigned rather than exposing the full capacity of the underlying resources. This means there could be a single set of hosts and storage that could be used by many different clouds. Once clouds are created, the cloud is assigned to different groups of users, or tenants, and those specific tenants have their own quotas within the capacity of the cloud. Individual users in the tenant group have their own subset of quota if required, giving very high levels of granularity and control over the consumption of resources in the cloud. Like clouds and underlying resources, many different tenants can be created for a single cloud. Clouds and tenants are defined and enabled through SCVMM. SCVMM also enables visibility into the usage of current cloud capacity and features role-based access control, which means end users could be given the SCVMM console to use for the creation of virtual machines because they would only see options related to their assigned actions; however, this is not really a good interface for end users to consume.
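The cloud construct itself is created in SCVMM, scoped to a host group and other fabric resources. A minimal sketch using the VMM 2012 R2 PowerShell module follows; the host group and cloud names are illustrative.

```powershell
# Sketch: create a cloud scoped to an existing host group. Capacity,
# storage classifications, networks, and library shares are then
# assigned to the cloud (for example, with Set-SCCloudCapacity)
# before tenants are granted quotas. Names are illustrative.
Import-Module virtualmachinemanager

$hostGroup = Get-SCVMHostGroup -Name "Lab Hosts"
$cloud = New-SCCloud -Name "Lab Cloud" -VMHostGroup $hostGroup `
    -Description "Development and test workloads"
```

Because several clouds can be scoped to the same host group with their own capacity limits, this is how a single set of hosts and storage can back many different clouds, as described above.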

To provide the self-service capability commonly associated with the private cloud, you use the System Center App Controller component. App Controller provides a web-based front end to expose the clouds defined in SCVMM to the user and also to integrate with Windows Azure–based clouds and even hosting partners that leverage the Service Provider Foundation (SPF). App Controller provides the ability to create virtual machines and also basic management such as snapshot creation and stop and start actions, plus a way to connect to the virtual machine consoles. App Controller also provides a bridge between on-premises private clouds and Windows Azure by allowing virtual machines to be migrated from on-premises to Azure.

While App Controller surfaces the clouds defined in SCVMM to the end user and allows self-service within the quotas defined as part of the cloud capacity and the tenant quotas, there is no concept of a provisioning workflow nor approval of requests. SCVMM and App Controller also lack detailed reporting on resource usage and the ability to charge business units based on resource consumption. You can use the Orchestrator and Service Manager components to leverage these features, as I will explain later in this chapter.

What does all this mean? For the key features of the private cloud such as scalability, utilization of resources, abstraction of the fabric, and self-service, the key components required for a Microsoft solution are Hyper-V, SCVMM, and App Controller. Other components such as Orchestrator and Service Manager bring additional functionality such as workflow, approval, and chargeback but are not required for a core private cloud solution. In this chapter, I will focus on SCVMM and App Controller as the foundation for the private cloud before briefly covering the additional features from the rest of System Center. Note that SCVMM no longer has its own self-service portal in the 2012 R2 version; App Controller replaces it.

SCVMM Fundamentals

There are many sayings in the world; two that apply to Hyper-V and SCVMM especially are “A poor workman blames his tools” and “Behind every great man is a great woman” (or maybe that's a song lyric). With virtualization and Hyper-V, the tools that are used to manage and interact with Hyper-V are critical to a successful virtualization endeavor. I would not blame an administrator for blaming an inefficiently run Hyper-V implementation on his tools if all the administrator had access to was the Hyper-V Manager tool supplied in the box: “A poor Hyper-V admin blames Hyper-V Manager, and so he should.” For effective and comprehensive management of a Hyper-V environment, or a heterogeneous virtualization environment including ESX and XenServer, System Center Virtual Machine Manager is a necessity: “Behind every great Hyper-V implementation is a great SCVMM.”


SCVMM 2012 R2, like the rest of the System Center 2012 R2 components, supports installation only on Windows Server 2012 or Windows Server 2012 R2, which requires a 64-bit server. The other software requirements are fairly minimal, and the only requirement you will have to manually install is the Windows Assessment and Deployment Kit (WADK), which includes components to create and manage operating system images that are required for SCVMM's bare-metal deployment features; you can download the correct WADK version from the Microsoft Download Center. The other requirements are actually part of the Windows Server 2012 R2 operating system, such as Microsoft .NET Framework 4.5, and are automatically installed by the SCVMM installation process. SCVMM must be installed on a server that is part of an Active Directory domain but does not have any strict requirements such as a Windows Server 2008 domain or forest functional level.

SQL Server 2008 R2 or SQL Server 2012 is required for SCVMM 2012 R2 to store its configuration and data, but this does not need to be installed on the SCVMM server. I recommend having a separate SQL server used for SCVMM and leveraging an existing SQL server farm in your organization that is highly available and maintained by SQL administrators. If you are testing SCVMM in a lab with a small number of hosts, then installing SQL on the SCVMM server is fine, but, where possible, you should leverage an external, dedicated SQL environment.

If you are running an older version of SCVMM 2012, there are specific operating system and SQL server requirements that are documented at the following locations:

·     Operating system requirements:

·     SQL Server requirements:

The actual hardware specifications required will vary based on the number of virtualization hosts being managed by SCVMM. A single SCVMM 2012 R2 server can manage up to 1,000 hosts containing up to 25,000 virtual machines. The Microsoft recommendations actually state that when you have fewer than 150 hosts per SCVMM instance, you can run SQL Server on the SCVMM server itself. I still prefer to limit the number of SQL instances in my environment, and it's better to invest in that well-architected and maintained highly available SQL farm rather than a local SQL install. Also, if you are planning on implementing a highly available SCVMM installation, then you need SQL Server separate from your SCVMM server. Virtualizing SCVMM 2012 is fully supported and indeed recommended. In my experience, all of the clients I work with virtualize SCVMM.

As with any virtualized service, it is important that you make sure the necessary resources are available to meet your virtualized loads and you don't overcommit resources beyond acceptable performance. Because SCVMM is so important to the management of your virtual environment, I actually like to set the reserve on the vCPUs for my SCVMM to 50 percent to ensure it always can get CPU resources in times of contention. Of course, as you will see, SCVMM should be doing a great job of constantly tweaking your virtual environment to ensure the most optimal performance and ensuring all the virtual machines get the resources they need, moving the virtual machines between hosts if necessary. If you have severely over-committed your environment by putting too many virtual machines on the available resources, performance will suffer, which is why proper discovery and planning are vital to a successful virtual environment.
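The vCPU reserve mentioned above is set per virtual machine on the Hyper-V host. Assuming the SCVMM server runs in a VM named SCVMM01 (an illustrative name), the Hyper-V PowerShell module can apply it as follows:

```powershell
# Reserve 50 percent of each virtual processor for the SCVMM VM so it
# can always get CPU time during contention. Run on the Hyper-V host;
# the VM name "SCVMM01" is illustrative.
Set-VMProcessor -VMName "SCVMM01" -Reserve 50
```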

Dynamic memory is fully supported by SCVMM. My recommendation for the dynamic memory setting for production environments is to set the startup memory to 2,048 MB and the maximum to 4,096 MB (the Microsoft minimum and recommended values) for environments managing fewer than 150 hosts. Set startup to 4,096 MB and maximum to 8,192 MB for SCVMM instances managing more than 150 hosts. This way, your environment is being efficient in the amount of memory it's using, but it stays within limits supported by Microsoft. You can certainly exceed these maximums if you find memory is low (but that should be unlikely), but I don't recommend you go below the minimum supported for the startup unless perhaps it's in a small lab environment with only a couple of hosts and you are short on memory.
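Assuming again a VM named SCVMM01 (an illustrative name), the recommendation for an SCVMM instance managing fewer than 150 hosts can be applied with the Hyper-V PowerShell module; the VM must be powered off to change its startup memory.

```powershell
# Dynamic memory per the guidance above for fewer than 150 managed
# hosts: 2 GB startup, 4 GB maximum. Setting the minimum equal to the
# startup value is an assumption; the VM name is illustrative.
Set-VMMemory -VMName "SCVMM01" -DynamicMemoryEnabled $true `
    -StartupBytes 2GB -MinimumBytes 2GB -MaximumBytes 4GB
```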

Note that during the installation of SCVMM, the install process performs a check for the minimum amount of memory. If the OS has less than 2000 MB of RAM, the install process will error and refuse to continue. If you are using dynamic memory and set the startup to less than 2048, you will likely hit this problem, so turn off the VM, set the startup memory to 2048, start the VM, and restart the installation. You will get a warning that the memory does not meet the recommended amount of 4000 MB, but this is just a warning and won't stop the installation process. Once the SCVMM install is complete, you can power down the VM, modify the startup value to less than 2048, and continue using SCVMM. It does not check for minimum memory anytime other than install, but remember that using less than 2048 really is not recommended and definitely should not be done in any production environment.

You must specify an account during the installation of SCVMM, which is used to run the actual SCVMM service. During installation you are given the option to either specify a domain account or use Local System. Don't use Local System; while it may seem like the easy option, it limits a number of capabilities of SCVMM such as using shared ISO images with Hyper-V virtual machines, and it can make troubleshooting difficult because all the logs will just show Local System instead of an account dedicated to SCVMM. On the flip side, don't use your domain Administrator account, which has too much power and would have the same problem troubleshooting because you would just see Administrator everywhere. Create a dedicated domain user account just for SCVMM that meets your organization's naming convention, such as svcSCVMM or VMMService. Make that account a local administrator on the SCVMM server by adding the account to the local Administrators group. You can do this with the following command, or you can use the Server Manager tool to navigate to Configuration ⇒ Local Users And Groups ⇒ Groups and add the account to the Administrators group.

C:\>net localgroup Administrators /add savilltech\svcSCVMM

The command completed successfully.
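The dedicated domain account itself can be created ahead of time with the Active Directory PowerShell module. A sketch, using the svcSCVMM name from this chapter; the OU path is illustrative and should match your environment.

```powershell
# Sketch: create the dedicated SCVMM service account. Run on a machine
# with the Active Directory module (RSAT); the OU path shown is an
# illustrative assumption.
Import-Module ActiveDirectory

New-ADUser -Name "svcSCVMM" -SamAccountName "svcSCVMM" `
    -Path "OU=Service Accounts,DC=savilltech,DC=com" `
    -AccountPassword (Read-Host -AsSecureString "Enter password") `
    -Enabled $true -PasswordNeverExpires $true
```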

Do not use a generic domain service account for different applications. This can cause unexpected results and once again makes troubleshooting hard. Use a separate account for each of your services, that is, one for SCVMM, one for System Center Operations Manager (in fact, you need more than one for Operations Manager), another for System Center Configuration Manager, and so on. What do I mean by unexpected results? When SCVMM manages a host, it adds its management account to the local Administrators group of that host, in my case svcSCVMM. If that host is removed from SCVMM management, that account is removed from the local Administrators group of that host. Now imagine you used a shared service account between SCVMM and another application that also needed its service account to be part of the local Administrators group. When you removed the host from SCVMM management, that shared service account would be removed from the local Administrators group on that host, and you just broke that other application.

If you have multiple SCVMM servers in a high availability configuration, the same domain account would be used on all servers, and it's actually a requirement to use a domain account in an SCVMM high availability scenario or if you have a disjointed namespace in your domain. For information on disjointed namespaces, see the Microsoft documentation on the topic. Ideally, this is not something you have in your environment because it can be a huge pain for many applications.

During the installation of SCVMM, there is an option to specify the storage of the distributed keys that are used for the encryption of data in the SCVMM database. Normally these keys are stored on the local SCVMM computer, but if you are implementing a highly available SCVMM installation, then the keys need to be stored centrally. For SCVMM this means storing them in Active Directory. For details on creating the necessary container in Active Directory for distributed key management, refer to the Microsoft documentation.

The actual installation process for SCVMM 2012 is a simple, wizard-driven affair that will guide you through all the required configuration steps, so I won't go into the details here. Just remember to specify your domain account for the service.

SCVMM 2012 supports being installed on a failover cluster now, which means the SCVMM service becomes highly available and can be moved in planned and unplanned scenarios using failover clustering technologies. An external SQL Server should be used to host the SCVMM database, and the installation of SCVMM to a highly available configuration is very simple. Start the SCVMM 2012 installation to an operating system instance that is part of a failover cluster, and the SCVMM install process will detect the presence of the failover clustering feature and prompt if the SCVMM installation should be made highly available. If you answer yes, then you will be prompted for an additional IP address and a name that will be used for the cluster SCVMM service, and that is really the only change in the installation process. A domain account will need to be specified for the VMM service to run as, and Active Directory will be used for storing the encryption keys. You would need to also install SCVMM on all the other nodes in the failover cluster so the SCVMM service can run on all nodes.

SCVMM Management Console

The SCVMM management console looks different from consoles you may be used to because System Center has moved away from the Microsoft Management Console (MMC) standard in favor of a new workspace-based layout. This layout does not have an official name, but I like to call it the System Center Console Framework. Figure 9.3 shows the new console for SCVMM, which is broken down into five main elements (also known as panes):

·     Ribbon: The ribbon has become a standard in most Microsoft applications, and you will quickly come to appreciate the dynamically changing ribbon showing the actions available for the selected object that highlights the actions that are most popular based on a lot of research by the SCVMM team.

·     Workspaces: The entire console is workspace based, and in the workspace area you select the workspace you want to work in, which is reflected in all other areas of the console. The workspace area shows the five available standard workspaces: VMs And Services, Fabric, Library, Jobs, and Settings. You will also hear workspaces unofficially referred to as wunderbars. After the initial configuration of SCVMM, you will not use Settings much, but the other workspaces will be used as you enhance your environment.

·     Navigation: This shows the areas of management available in the current workspace.

·     Results: Based on the current navigation node selected, this area will show all the results for that area. Note the Results pane will also be affected by elements selected in the ribbon, which can control what is shown in the Results pane based on the current workspace and Navigation area.

·     Details: The Details pane is not always shown but, when available, will show detailed information on the currently selected object in the Results pane.


Figure 9.3 All elements of the SCVMM console change based on the current workspace and selected element of the workspace.

The best way to learn the SCVMM console is to fire it up and just look around. Explore all the workspaces, select the different nodes in the Navigation pane, and pay attention to the ribbon, which will change and show some interesting options that you will want to play with.

The MMC was great for its original concept of a standardized interface that allowed different snap-ins to be placed and organized in a single console, but it had restrictions, particularly around role-based access control (RBAC), which is a key tenet of System Center 2012 and newer. I'm talking about System Center here instead of SCVMM because the focus on RBAC is common to all of System Center, not just SCVMM. As System Center is used more broadly across an organization, it's likely that different groups of users will be given access to only certain functionality areas of the System Center 2012 components and, within those areas, be able to perform actions on only a subset of all the objects. In the past, while delegating different permissions was possible, the users with delegated rights would still see all the elements of the administrative console and would simply get “Access Denied” messages. With the new System Center model and RBAC, delegated users see only the areas of the console they have rights to and only the objects they are allowed to work with. A great example in SCVMM is granting delegated rights to a group of users for only a specific collection of virtualization hosts. As Figure 9.4 shows, full administrators see the entire host hierarchy and all the available clouds on the left side, while application administrators (self-service users) for the Lab Cloud servers cannot see any of the clouds, nor do they have any knowledge of host groups other than Lab Cloud, which they have been assigned to. Showing application administrators only the console elements and objects they have rights to makes the console easier to use and more intuitive, avoids the “why don't I have access to x, y, and z?” questions, and makes the administrative tool usable by normal users such as self-service users.
Notice the delegated user also has no view of the fabric workspace at all, and the other workspaces have information limited to their specific cloud.


Figure 9.4 The view on the left shows a normal SCVMM administrator, while on the right you see the view for a Lab Cloud host group delegated administrator.

In SCVMM, user roles are created and assigned in the Settings workspace in the Security ⇒ User Roles navigation area. By default, user roles exist for administrators and self-service users, but additional roles can be defined. Other profiles beyond application administrators include fabric administrators (delegated administrators), who have administrative rights on objects within their assigned scope; read-only administrators, which is great for users like the help desk who need to see everything but should not be able to change anything and which can be scoped to specific host groups and clouds; and additional self-service roles, such as Tenant Administrator, which can be scoped to different clouds and can have different actions and quotas available to them. The adoption of the new interface also enables the ribbon, which really does help when interacting with System Center.

The SCVMM 2012 R2 console can be installed on any Windows 7 SP1 x86 or x64 Professional or newer operating system in addition to Windows Server 2008 R2 SP1 or newer servers. By installing the SCVMM 2012 console on other machines, you can remotely manage SCVMM and avoid having to log on to the SCVMM 2012 server itself. I like to install all the various management consoles from Windows and System Center on a Remote Desktop Session Host and then publish the administrative tools over RDP. I can then get to the admin tools from any device and operating system. I walk through the process in the video at

Once you start using SCVMM for managing your Hyper-V environments, you should not use Hyper-V Manager or the Failover Cluster Management tool for normal virtualization resource management. If you do make changes using Hyper-V Manager directly, then SCVMM may not be aware of the change, and it can take some time for SCVMM to detect it, giving inconsistent views between Hyper-V Manager and SCVMM. For best results, once SCVMM is implemented and managing virtualization resources, don't use other management tools to manage the same resources.

Unlocking All Possibilities with PowerShell

When you look at any component of System Center 2012 and newer or at Windows Server 2012 and newer, one common theme is the prevalence of PowerShell. Everything that is done in the System Center consoles is actually performed by an underlying PowerShell cmdlet. As you make a selection in a console and click an action, the console composes the correct PowerShell command and executes it behind the scenes. There are many actions you can take only with PowerShell, and as you become more acquainted with System Center, you will use PowerShell more and more. I'm not just talking about manually running actions: when you consider that every management action for the entire System Center 2012 product can be performed using PowerShell, the automation possibilities can truly be realized, and you will start to automate more and more processes.
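You can see this cmdlet coverage for yourself by loading the VMM PowerShell module on a machine with the SCVMM console installed and listing its cmdlets. A quick sketch, assuming the module is available locally:

```powershell
# Load the VMM module (installed with the SCVMM console) and count its
# cmdlets; every action in the SCVMM console maps to one or more of these.
Import-Module virtualmachinemanager
Get-Command -Module virtualmachinemanager | Measure-Object
```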

A great way to get started with PowerShell is to use the graphical interface to perform an action, such as creating a new virtual machine, and in the Summary stage of the wizard you will see a View Script button at the bottom-right corner. Click the View Script button, and you will be shown all the PowerShell commands the console is going to run to perform the actions selected. You can now take all these commands and add them into your own scripts or automation processes.


The SCVMM Library

Throughout this book I talk about many aspects of SCVMM, but I want to spend some time on the concept of libraries in SCVMM because they are critical to many activities. While it would be possible to store the many resources used in virtualization management in various locations, the best way is to utilize the SCVMM library feature, which allows one or more file shares to be used by SCVMM as a central repository for those assets. Typical assets placed in the library include the following:

·     Virtual machine templates, which include the virtual machine hardware configuration, OS configuration information such as domain membership, the product key, and other configuration options for enabling the fast creation of new virtual machines.

·     Virtual hard disks, which will primarily be VHD for Hyper-V virtual machines (and XenServer) but can also store VMDK for ESX. VHD files can also be used to deploy physical Hyper-V servers.

·     Virtual machines that are not in use. Storing unused virtual machines in the SCVMM library saves disk space on the virtualization hosts or shared storage. You can then deploy them again if needed. For end users, this also frees up their VM quota!

·     ISO files, which are images of CDs and DVDs that can be attached to virtual machines to install operating systems or applications.

·     Drivers.

·     Service templates, which describe multitiered services.

·     Various types of profiles such as hardware profiles and guest OS profiles that are used as building blocks for creating templates. Host profiles (for physical deployments of Hyper-V servers), capability profiles that describe the capabilities of different hypervisors or environments, SQL Server profiles when installing SQL Server, and application profiles for application deployment. Think of profiles as building blocks for use in other activities within SCVMM.

·     Update baselines and catalogs.

·     Scripts and commands used for management, which can be grouped together into packages called custom resources (which as previously mentioned are just folders with a .cr extension).

I should be clear that while libraries do have a physical manifestation by storing content on the file shares you specify when you add new library servers, not everything in the library is saved as a file. You will not find virtual machine templates or profiles as files on the file system; instead, templates and profiles are stored as metadata in the SCVMM SQL database.

The file system location that corresponds to a location in the library can be accessed by right-clicking a library branch and selecting Explore or by selecting Explore from the ribbon. To add non-virtual-machine content such as drivers and ISO files, just use the Explore feature and copy the content onto the file system using Windows Explorer. When the library content is refreshed, the new content will be displayed; a refresh can be forced by selecting the library server and then selecting the Refresh action on the Library Server ribbon tab. By default, library content is automatically refreshed once an hour, but you can change this in the Settings workspace: in the General navigation area, select Library Settings and change the refresh interval per your organization's requirements.
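If you don't want to wait for the hourly refresh after copying files in via Explore, the same refresh can be triggered from PowerShell with the Read-SCLibraryShare cmdlet. A minimal sketch, assuming a hypothetical library server name (MSSCVMMLibrary is the default share name):

```powershell
# Force an immediate refresh of a specific library share so newly copied
# files (ISOs, drivers, and so on) appear in the SCVMM library right away.
$share = Get-SCLibraryShare | Where-Object { $_.Path -eq "\\scvmmlib\MSSCVMMLibrary" }
Read-SCLibraryShare -LibraryShare $share
```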

I previously covered the creation of templates, which are one of the primary reasons to use SCVMM, so I'm going to move on to using other types of resources. While a single SCVMM library server is added during the installation of SCVMM, additional library servers can be added. It's common to add multiple library servers, particularly so you have a library server in each datacenter where you have virtualization hosts; this ensures content that may need to be accessed by the hosts is locally available and you do not have to traverse a WAN connection. When you add a library server, check the Add Default Resources box during share selection for all the SCVMM default library content to be copied to the share. It is fully supported to host the file share that stores the library content on a highly available file server, meaning it's part of a failover cluster, which helps ensure the content is available even if a node fails.
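Adding a library server can also be scripted. The following is a sketch only; the file server name, share, and Run As account name are hypothetical, and it assumes the Run As account has rights to the server:

```powershell
# Add a new library server and one of its shares to SCVMM.
# "LibSvc" and "nyfs01" are hypothetical names for illustration.
$cred = Get-SCRunAsAccount -Name "LibSvc"
Add-SCLibraryServer -ComputerName "nyfs01.contoso.com" -Credential $cred
Add-SCLibraryShare -SharePath "\\nyfs01.contoso.com\SCVMMLibrary" -AddDefaultResources
```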

To ensure hosts use a library server that is closest to them, library servers can be assigned to host groups by selecting the properties of the library server and setting up the host group, as shown in Figure 9.5. The recommendation is that virtualization hosts should be connected by at least a 100 Mbps link to the library server they use, but ideally 1 Gbps.


Figure 9.5 Specifying the library server for a specific host group

Replicating Library Content Between Multiple Library Servers

SCVMM has no capability to replicate the content of the library servers. If your organization has 20 SCVMM library servers, that means there are 20 file shares all with their own content that you need to keep maintained. If you add a VHD to one library, you need to add it to the other 19 manually.

There are a number of solutions to keep the content replicated, but all involve initially having a single “master” library, which is the library you add new content to and update/remove existing content from. A technology is then used to synchronize this master copy to all the other library servers. One way to replicate the content is to use the Microsoft Robust File Copy tool (Robocopy), which copies the content from the master to all the other libraries in the organization. Once the copy is complete, a manual refresh of the library is performed in SCVMM to load in the new content, which can be done in PowerShell using the Read-SCLibraryShare cmdlet. Another option is to use Distributed File System Replication (DFSR), which allows master-slave relationships to be created and will automatically replicate changes from the master to the slave library shares; again, the new content will not show until a library refresh is performed. Note you cannot use Distributed File System Namespaces (DFSN) as a location for libraries, only the DFSR replication component.

If you have other replication technologies in your organization, that is fine. The two technologies mentioned here are free, Microsoft-provided technologies.
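As a sketch of the Robocopy approach, the copy and the subsequent library refresh can be combined into a simple script run on a schedule. The paths are hypothetical, and note that /MIR also deletes files on the target that no longer exist on the master:

```powershell
# Mirror the master library share to a remote library share, then force
# SCVMM to re-read that share so the new content is displayed.
robocopy.exe \\masterlib\SCVMM \\londonlib\SCVMM /MIR /R:2 /W:5
Get-SCLibraryShare | Where-Object { $_.Path -eq "\\londonlib\SCVMM" } |
    Read-SCLibraryShare
```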

If you have multiple libraries, you will end up with the same content on many different library servers, and your templates will refer to the content on a specific library server, such as \\londonlib\SCVMM\VHDs\2012R2.vhd. But suppose you are actually deploying to a server in New York, and there is a library server in New York, \\newyorklib\SCVMM, that has exactly the same file, which you would rather use than copying the content over the North Atlantic. SCVMM allows equivalencies to be created in the library, which, as the name suggests, allow you to specify that various objects from the different libraries are actually the same object. This means that even though a template may say to deploy \\londonlib\SCVMM\VHDs\2012R2.vhd, because you created an equivalency between the \\londonlib\SCVMM\VHDs\2012R2.vhd and \\newyorklib\SCVMM\VHDs\2012R2.vhd files, if you deployed the template in New York, it would use the VHD from the New York share. This also provides redundancy: if the New York library was not available, the London library could be used.

To create an equivalency, you select the root Library Servers node in the Navigation pane in the Library workspace. You can then add a filter in the Results pane to show the objects of interest. Select the objects that are the same and then select the Mark Equivalent action; a dialog will open that will ask for a family for the objects and then a release. Both of these values are just text values but are used to help find other objects that match the family and release, so be consistent in your naming. As you type in the values, autocomplete will show existing values, or you can select from the drop-down.
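Equivalencies can also be set from PowerShell by stamping the same family and release values onto the matching objects. A sketch for virtual hard disks, with hypothetical file, family, and release names:

```powershell
# Mark every copy of the 2012R2.vhd disk, wherever it is stored, as
# equivalent by giving each one the same FamilyName and Release values.
$vhds = Get-SCVirtualHardDisk | Where-Object { $_.Name -eq "2012R2.vhd" }
foreach ($vhd in $vhds) {
    Set-SCVirtualHardDisk -VirtualHardDisk $vhd `
        -FamilyName "Windows Server 2012 R2" -Release "1.0.0.0"
}
```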

One of the interesting ways library content is used is the use of ISO files, which are files that contain the content of a CD or DVD. To inject a CD/DVD into a virtual machine, you access the properties of a virtual machine by selecting the VM in the Results pane of the VMs And Services workspace and selecting the Properties action. Within the properties of the virtual machine, you select the Hardware Configuration tab, and under Bus Configuration you will find a Virtual DVD Drive. Select Virtual DVD Drive, and notice there is an option for No Media, which means the drive is empty. Physical CD or DVD Drive links it to the physical optical drive in the virtualization host, and Existing ISO Image File allows you to select an ISO file from the library.

Notice an interesting option in Figure 9.6: Share File Instead Of Copying It. A CD/DVD image is normally used to install some software onto an operating system. If the VM accessed the ISO file over the network and that connectivity was lost, it could cause unexpected results because it would appear the media was suddenly ripped out. To avoid this, by default when an ISO is attached to a VM drive, the ISO is first copied using BITS over the HTTPS protocol to the Hyper-V host, into the virtual machine's folder, and the VM attaches to the local copied ISO file. This means a network interruption will not stop access to the ISO. When the ISO is ejected from the VM, the copied ISO file is deleted from the local host. While this uses disk space while the ISO is in use, it gives the safest approach. The same copy approach is used for ESX and XenServer but uses a file copy technology specific to each virtualization platform. For Hyper-V only, SCVMM gives the option of not copying the ISO to the virtualization host and instead attaching the virtual drive to the ISO on the SCVMM library file share, which is the Share File Instead Of Copying It option. There are some specific configurations required to enable sharing; see
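Attaching an ISO can also be driven from PowerShell. To the best of my knowledge, the -Link parameter of Set-SCVirtualDVDDrive corresponds to the Share File Instead Of Copying It option; the VM and ISO names below are hypothetical:

```powershell
# Attach a library ISO to a VM's virtual DVD drive, sharing it from the
# library share (-Link) instead of copying it to the Hyper-V host first.
$vm  = Get-SCVirtualMachine -Name "TestVM"
$iso = Get-SCISO | Where-Object { $_.Name -eq "ws2012r2.iso" }
$dvd = Get-SCVirtualDVDDrive -VM $vm
Set-SCVirtualDVDDrive -VirtualDVDDrive $dvd -ISO $iso -Link
```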


Figure 9.6 Attaching an ISO using SCVMM from the library

The library is one of the key capabilities of SCVMM. All types of resources can be stored in the library, even entire virtual machines, so it's important to architect the right number of library servers, ensuring proximity of a library server to your hosts in all your datacenters.

Creating a Private Cloud Using System Center Virtual Machine Manager

I'm going to assume System Center Virtual Machine Manager is fully configured with all your hypervisor compute resources, such as all the Hyper-V servers that have been placed into failover clusters that have been dynamic- and power-optimized to get the most performant and highly available solution, which minimizes power wastage by turning off hosts when not needed. SCVMM has been connected to your hardware load balancers, all the storage area networks have their SMI-S providers in SCVMM, and the storage has been classified such as gold, silver, and bronze for all locations. Logical networks and sites have been defined. Virtual machine templates for all common configurations have been created, and common services have been modeled as a service template. System Center Operations Manager is performing monitoring of the environment, System Center Data Protection Manager is backing up and protecting the environment, and System Center Configuration Manager is providing patching, desired configuration, and inventory information. Everything is well configured and healthy, so you are ready to create a cloud.

To really understand what goes into creating a cloud in System Center Virtual Machine Manager and all the options, I will walk you through creating a cloud and granting users access to it. This will show all the capabilities and the delegation options for different groups of users.

You use the Virtual Machine Manager Console to create a new cloud, which is achieved through the VMs And Services workspace by selecting the Create Cloud action from the ribbon. The first step is to specify a name and description for the cloud. A good example may be Development for the name and then Cloud for development purposes, with access to the development network only and silver tier storage. Make it something useful.

The next step sets the actual resources that will be included in the cloud, as shown in Figure 9.7. The host groups will govern the compute resources (virtualization hosts) that will be included in the created cloud in addition to the various storage and networks that are available in those host groups. Remember that just because a host group is specified, it does not mean the entire capability or connectivity of that host group is exposed to the cloud. You can specify exactly what you want to give access to later in the wizard. The same host groups can be included in multiple clouds. Also note in the dialog that a VMware resource pool can be selected directly.

The next stage of the wizard allows the selection of which logical networks are available to the cloud. The logical networks displayed will vary depending on the connectivity available to the hosts in the host groups specified in the previous screen. Select the logical networks this cloud should have access to, as shown in Figure 9.8, and click Next to continue with the wizard.


Figure 9.7 Selecting the host group that is available for utilization by the cloud


Figure 9.8 Selecting the logical networks available to the cloud

This screen lets you select the hardware load balancers that can be used. The hardware load balancers displayed depend on the host groups and logical networks selected, since a hardware load balancer is tied to host groups and logical networks. Once again, click Next to continue, which allows you to select the virtual IP profiles to make available to the cloud; these are tied to the load balancers selected on the previous screen (do you see the pattern now?). Make the selections and click Next. The various types of port classifications for the cloud are then displayed; select the ones desired for the cloud and click Next.

The Storage stage of the wizard displays all tiers of storage that are available within the selected host groups. Select the tiers of storage that should be available to the cloud, as shown in Figure 9.9. For a development cloud, as an example, I would select lower tiers of storage such as bronze. Only storage tiers that are available to the selected host groups will be displayed. Click Next.


Figure 9.9 Selecting the storage classifications available to the cloud

The next step is selecting a library configuration, which has two parts. The first is the selection of the read-only library shares; these are standard SCVMM libraries in the environment that you want to grant the cloud access to, and their contained resources can be used to create virtual machines and services. You could create libraries with a subset of ISO images to limit what can be created in the clouds. The read-only library needs to be unique and not used as part of a normal library. The second part is the stored VM path, which is an area in which the users of the cloud can actually store content. Why would you want to give cloud users a writable library area? Consider that users of the cloud will have a certain quota that limits the number of virtual machines they can create. They may run out of quota and need to create another VM, or they may simply not need a VM right now but not want to lose its configuration. When you give the users a place to store VMs, they can save a VM to storage, which removes it from the virtualization host and thus frees up their quota. In the future, the VM can be deployed from the library back to a virtualization host, where it once again counts against their quota. Note that the path specified for the storage of VMs cannot be part of a library location specified as a read-only library. An easy solution is to create a new share on a file server, add it to SCVMM as a library, and then use it as the writable area for a cloud. Once everything has been configured, click Next.

The next stage is configuring the capacity for the cloud (Figure 9.10), and this gets very interesting because there are different approaches an organization can take to managing capacity for the cloud. By default the capacity is unlimited, but I can change any of the dimensions of capacity, such as virtual CPUs, memory, storage, custom quota (which is carried over for compatibility with SCVMM 2008 R2), and virtual machines. I can set the values to use the maximum, a smaller amount, or a higher amount, as I did with memory in Figure 9.10. Remember, this is the capacity available to this cloud, so I don't have to expose the full capabilities of the underlying host groups; I may have 10 different clouds on a single set of host groups and want to divide the resources between clouds. But wait a minute; I just set the memory to higher than I have available in the underlying hosts in the selected host groups. How does this work? It's quite acceptable to set the capacity of a cloud to exceed that of the current underlying resources of the cloud. It is just important that the proper resource utilization mechanisms and processes are in place so that as a cloud starts to approach the capacity of the underlying resources, additional resources are added to the host groups. This is where System Center Operations Manager is great for monitoring the usage of resources; it can then work with System Center Virtual Machine Manager and System Center Orchestrator to add new Hyper-V hosts and place them into host groups. The same could be done for storage by adding new LUNs to the required storage tiers. Scalability is a key attribute of the private cloud. Set the capacity for the cloud and click Next.


Figure 9.10 Configuring the capacity for the cloud


Figure 9.11 Custom capability profile

The next stage, capability profiles, is an interesting concept. Capability is different from capacity: capacity is the limit of what can be stored in the cloud, while capability defines what the virtual machines created in the cloud are, well, capable of. For example, what is the maximum number of virtual CPUs that can be assigned to a virtual machine, and what is the maximum amount of memory for a VM? By default, three capability profiles exist—one for Hyper-V, one for ESX, and one for XenServer—which reflect the maximum capabilities of each hypervisor platform. For example, the Hyper-V capability profile sets the processor range from 1 to 64 and the memory from 8 MB to 1 TB, which are the limits for Hyper-V 2012. The ESX capability profile sets the processor range from 1 to 8 and the memory from 4 MB to 255 GB, which are the ESX 4.1 limits. By default, you can select any combination of the three built-in, locked capability profiles for your cloud based on the hypervisors used in the cloud, but you don't have to.

Imagine you are creating a development cloud today. Windows Server 2012 and 2012 R2 Hyper-V are available with their support for virtual machines with 64 virtual CPUs and 1 TB of RAM. Suppose I give a user a quota of 20 virtual CPUs and 24 GB of memory; do I want that user consuming their whole quota with a single VM? Not likely. Instead, I could create a custom capability profile in the Library workspace (under Profiles ⇒ Capability Profiles) to match the capabilities I want in a specific cloud. In Figure 9.11, I have created a custom capability profile that has a limit on the number of virtual CPUs and a memory range from 512 MB to 4 GB, and I could also mandate the use of dynamic memory. Note that I can also set the number of DVD drives allowed (useful if shared images are used), the number of hard disks allowed and their type and size, the number of network adapters, and even whether high availability is available or required.

There is a potential pitfall of creating custom capability profiles if you don't plan well. Many resources, such as virtual machine templates, have a configuration that sets the required capability profile. If you don't update those resources with your custom capability profile, you won't be able to assign them to your new cloud. This is configured through the Hardware Configuration area of the VM template: select the Compatibility option and ensure the new capability profile is selected.

Once you've created your custom capability profiles, you can elect to use them for your cloud. The custom capability profiles created can be used in addition to, or instead of, the built-in capability profiles. Click Next.

A summary of all your choices is displayed on a confirmation screen, along with the magic View Script button that shows the PowerShell code to create the complete new cloud. The following is a basic example without hardware load balancers or virtual IP templates, but it gives an idea of what is going on. Once you have the PowerShell code, you could use it in other components like System Center Orchestrator to automate the creation of clouds based on requests from Service Manager.

# Cloud capacity and resources are staged in a job group, which executes
# when New-SCCloud runs with the same job group ID (shown here as XXXXXX).
Set-SCCloudCapacity -JobGroup "XXXXXX" -UseCustomQuotaCountMaximum $true -UseMemoryMBMaximum $false -UseCPUCountMaximum $false -UseStorageGBMaximum $false -UseVMCountMaximum $true -MemoryMB 36864 -CPUCount 40 -StorageGB 1024

$resources = @()
$resources += Get-SCLogicalNetwork -Name "Hyper-V Network Virtualization PA" -ID "c75b66eb-c844-49a2-8bbd-83198fc8ccc0"
$resources += Get-SCLogicalNetwork -Name "Lab" -ID "XXXXXX"
$resources += Get-SCStorageClassification -Name "Gold" -ID "XXXXXX"

$addCapabilityProfiles = @()
$addCapabilityProfiles += Get-SCCapabilityProfile -Name "Hyper-V"

Set-SCCloud -JobGroup "XXXXXX" -RunAsynchronously -AddCloudResource $resources -AddCapabilityProfile $addCapabilityProfiles

$hostGroups = @()
$hostGroups += Get-SCVMHostGroup -ID "XXXXXX"

# This final command triggers the job group and actually creates the cloud.
New-SCCloud -JobGroup "XXXXXX" -VMHostGroup $hostGroups -Name "Test Cloud" -Description "" -RunAsynchronously -DisasterRecoverySupported $false

You now have a cloud that no one can use, so the next step is to assign the cloud to users and groups. To assign access to clouds, you use user roles. These can be a Delegated Administrator, who can do anything to the objects within their scope; a Read-Only Administrator, who can view information about everything but change nothing, which is useful for auditors and interns; or a Self-Service user. Each user role has a scope that defines the clouds it applies to, the capabilities available, and the users/groups within that user role. It is common, therefore, to create a new Self-Service user role, and possibly a Delegated Administrator user role, for every cloud you create to enable granularity in assigning cloud access.

Open the Settings workspace, navigate to Security ⇒ User Roles, and select the Create User Role action on the ribbon. The Create User Role Wizard will open, and a name and description for the object being created are requested. If the user role is cloud specific, then include the name of the cloud in the name. Click Next and the type of user role is requested; select Self-Service User and click Next.

The next stage prompts for the users and groups that are part of this role. Normally, my recommendation is to always use Active Directory groups and add the users that need access to the AD group, so it's not necessary to keep modifying the user role. When a user is added to the AD group, the user automatically gets the cloud rights that the AD group has. This works great if you are creating a cloud for a certain business unit and that business unit already has an AD group: just grant that business unit's AD group access to the cloud-specific Self-Service user role, and as users join the business unit, they get access to the cloud. Even if the cloud is not business-unit specific, if you have good processes in place to add users to groups, you can still use an AD group; for example, developers could be given access to a development cloud. Where my guidance changes is when there are not good processes in place to add users to groups and it's beyond the control of the team implementing the private cloud to fix this or effect change. In those cases, I may lean toward adding users directly into the user roles within SCVMM, which can be automated through PowerShell and cuts out the potentially large delays associated with adding a user to an AD group. Add the users and/or groups and click Next.
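Creating the role and adding members can itself be scripted, which is handy when you automate onboarding rather than relying on AD group processes. A sketch with hypothetical cloud, role, and group names:

```powershell
# Create a self-service user role, scope it to a cloud, and add an AD group.
# "Development" and "CONTOSO\Developers" are hypothetical names.
$cloud = Get-SCCloud -Name "Development"
$role  = New-SCUserRole -Name "Development Self-Service" -UserRoleProfile "SelfServiceUser"
Set-SCUserRole -UserRole $role -AddMember @("CONTOSO\Developers") -AddScope $cloud
```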

On the next page, you select the clouds the user role applies to. Note there are no host groups shown, only clouds; with user roles, System Center Virtual Machine Manager is the cloud or nothing. Select the clouds and click Next.

Next you set the quotas for the user role. Remember, when creating the actual cloud, you set the capacity. Now you are setting the quotas for this specific user role in the cloud as well as the quotas of each user within the user role. Note that you may have multiple Self-Service user roles for a single cloud with different quotas and different actions available. In Figure 9.12 I have set an unlimited quota for the role, giving it full access to the cloud, but each user has far smaller limits. Make your configurations and click Next.


Figure 9.12 Setting the quotas for a specific tenant

The next step is adding the resources that should be available to the role such as virtual machine templates, hardware profiles, service templates, and so on. These resources are what will be available when the users of this role create virtual machines, so ensure the right templates are available for them. Additionally, you can specify a location for user role data that is shared between all members of the role. Click Next to continue.

You can now configure permitted actions for the user role, which are fairly granular and fully documented by Microsoft. Once the actions have been configured, the next step allows specific Run As accounts to be made available to the user role. Take care with which Run As accounts you make available, because you don't want to give basic users access to highly privileged Run As accounts. Click Next, and a summary will be shown; then the role will be created. Once again, you can export the PowerShell code used.

Granting Users Access to the Private Cloud with App Controller

I've talked about installing the SCVMM management console for end-user access, but this is really not practical. Instead, the preferred interface for users is via a web browser, which is primarily achieved using System Center App Controller.

System Center App Controller is a thin installation. It can be installed on the System Center Virtual Machine Manager server, but for busy environments, you will gain better performance by running App Controller on its own server. App Controller uses a SQL database for its configuration storage, but the rest of its requirements are light; the install media is around 23 MB at the time of writing. What System Center App Controller delivers is a web-based Silverlight experience: a rich, interactive interface that provides access to both private clouds provided by System Center Virtual Machine Manager and public cloud services such as Windows Azure, as shown in Figure 9.13.


Figure 9.13 App Controller view of clouds

Virtual machines can be managed within SC App Controller, but it can do so much more. It can deploy entire services based on the service templates you created in SCVMM; these templates can have many tiers with different applications at each tier. If you have a Windows Azure application, it can be deployed to your Windows Azure subscription from within App Controller and fully managed and scaled without having to use the Windows Azure website. Services can be moved between your on- and off-premises clouds, giving you power, flexibility, and portability. App Controller can also hook into hosting partners that leverage the Service Provider Foundation (SPF).

App Controller is supported on Windows Server 2008 R2 SP1 and newer, but if you are installing App Controller on the same server as Virtual Machine Manager 2012 R2, you have to use Windows Server 2012 or Windows Server 2012 R2 because VMM 2012 R2 does not support Windows Server 2008 R2. App Controller leverages its own database to store its configuration information, which can be SQL Server 2008 R2 or SQL Server 2012. It is common to deploy App Controller on the same server as Virtual Machine Manager because they work so closely together; however, in large environments that require greater scalability, App Controller can be deployed on its own operating system instance. App Controller can be made highly available and/or load balanced using a number of mechanisms. Make sure the SQL database is part of a highly available SQL cluster, and then for the App Controller server either make the App Controller virtual machine highly available or deploy multiple App Controller instances that use the same SQL database and share a common encryption key.
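When running multiple load-balanced App Controller instances against a shared database, each instance must use the same encryption key. The sketch below shows exporting the key from the first instance so it can be supplied when installing the others; the Export-SCACAesKey usage reflects my understanding of the App Controller PowerShell module, and the file path is a placeholder.

```powershell
Import-Module AppController

# Export the AES key that protects secrets in the shared App Controller
# database; the password protects the exported key file itself
$password = Read-Host -AsSecureString -Prompt "Password to protect the key file"
Export-SCACAesKey -Path C:\Keys\AppController.key -Password $password
```

Supply this key file (and its password) during setup of each additional App Controller server that points at the same SQL database.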

Installation and Initial Configuration

The installation of App Controller is simple because most of its actual configuration comes from Virtual Machine Manager in terms of the users and resources that are available. This means that before deploying App Controller, it's important you have deployed Virtual Machine Manager 2012 R2 and have created clouds of resources and defined tenants (groups of users who have various rights and quotas for those clouds). This is the fundamental building block that enables App Controller to integrate with on-premises virtual environments. It's also necessary to have created virtual machine templates and optionally service templates because these are how users will deploy environments through App Controller. Note that your App Controller administrators must also be Virtual Machine Manager administrators. The account you use to install App Controller must be a member of the local Administrators group on the server and will become the first App Controller administrator.

The actual installation of App Controller is well documented by Microsoft and is intuitive because there are so few installation options. Provided you have the SQL Server deployment available to be used by App Controller and your server meets the requirements, the only change you may want to make to the standard installation is using a trusted SSL certificate instead of the self-signed certificate generated by the install process. This means your users won't get warnings that the certificate is not trusted. I definitely recommend against using the self-signed certificate unless App Controller is being used only in a limited test environment.

The only configuration you must perform is to connect App Controller to the Virtual Machine Manager instance, which is done by logging on to the App Controller website, https://&lt;App Controller Server&gt;, and then selecting the Settings navigation node and the Connections child navigation option. Select the Connect ⇒ SCVMM action, as shown in Figure 9.14, which will launch the Add A New VMM Connection screen. Enter a name for the new VMM connection, a description, and then the actual VMM server name (remember, you should be logged on as an account that is both an App Controller and a VMM administrator). You will notice a Windows Azure connection exists by default, which is what allows the addition of Azure subscriptions to App Controller. Also notice the option to add a service provider, which could be a company you use that offers virtual machine hosting and has implemented the Microsoft Service Provider Foundation, allowing it to be managed via App Controller. Think of SPF as exposing the hoster's own VMM deployment to your on-premises App Controller. It's necessary for the hoster to give you the URI for your tenant ID in their environment to complete the connection from App Controller.

If your organization leverages Windows Azure, those subscriptions should be added to App Controller via the Clouds navigation view and the Connect ⇒ Windows Azure Subscription action (or Settings ⇒ Connections ⇒ Add). To link to Azure, it is necessary to have created a management certificate and imported it into the Azure subscription so it is available for importing into App Controller as part of the process of connecting App Controller to Azure. I detail the whole process, including how to create and use the certificate, in a separate online walk-through.

Under the Settings navigation node is the User Roles node, where you can configure additional App Controller administrators by adding members to the Administrators built-in user role. If you added Windows Azure subscriptions or connections to SPF hosters, additional user roles can also be created to control App Controller user access to Azure subscriptions and SPF services, since these can't be controlled through your on-premises VMM instance, which manages only the on-premises, private cloud infrastructure.

Consider what you now have with App Controller. You have access to your on-premises private clouds defined and managed in Virtual Machine Manager, access to your Windows Azure subscriptions, and access to your services at hosting partners that leverage SPF. Initially, most organizations will just use App Controller for their on-premises private cloud management, but the ability to grow its reach will really help organizations embrace a hybrid cloud approach.

Some customization of App Controller is possible. It's easy to change the graphics for App Controller. The logos are just two image files in the %PROGRAMFILES%\Microsoft System Center 2012 R2\App Controller\wwwroot folder, called SC2012_WebHeaderLeft_AC.png and SC2012_WebHeaderRight_AC.png. Just back up and replace these images; your replacements must be the same dimensions as the originals, 213×38 and 108×16, respectively, or they won't work.
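Swapping a logo can be scripted; here is a minimal sketch for the left header image (the replacement image path is a placeholder, and the replacement must already be sized 213×38):

```powershell
# Locate the App Controller web root and the stock left header logo
$wwwroot = Join-Path $env:ProgramFiles "Microsoft System Center 2012 R2\App Controller\wwwroot"
$logo    = Join-Path $wwwroot "SC2012_WebHeaderLeft_AC.png"

# Keep a backup of the original before overwriting it with custom branding
Copy-Item $logo "$logo.bak"
Copy-Item "C:\Branding\CompanyLogo_213x38.png" $logo -Force
```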

All App Controller configuration is performed through the website; there is no separate management tool. Because the App Controller web interface is based on Silverlight, the browser used must be a 32-bit browser that supports Silverlight version 5. This means you will need to use Internet Explorer 8 or newer. Other browsers will not work at this time. If you are embracing PowerShell, there is a PowerShell module for App Controller that enables all of the key configurations related to App Controller. Run the following PowerShell command to see all the available cmdlets:

Get-Command -Module AppController

User Interaction with App Controller

App Controller is ready to be used and is functional straightaway once it's been connected to your cloud services. Users navigate to https://<App Controller Server> and log on using their domain credentials in a forms-based interface. If desired, it is possible to enable single sign-on, which will remove the forms-based authentication completely. Once the user is authenticated, they will see whatever clouds have been defined in VMM that they have access to in addition to their Azure and SPF-based services on the Overview screen. If a user is not a tenant (which means they are not part of any role that has rights) of a cloud defined in VMM, then they will not see any private clouds in App Controller. Think of App Controller just as an interface into the VMM configurations for private cloud. The actual granting of rights is all performed through VMM.

The actual App Controller interface is highly intuitive, and users can typically pick up its use easily. Figure 9.15 shows a default view for users when they log in and navigate to the Clouds view. In the example, you can see the user has access to a single on-premises cloud and no Azure subscriptions. For the on-premises cloud, it shows the user's current resource usage and remaining quota. Any time a user deploys a new VM or service, App Controller shows the quota impact and ensures users cannot exceed their quota. Notice that for standard users the Settings navigation node is not shown.


Figure 9.14 Adding a VMM connection to App Controller


Figure 9.15 A cloud view for a cloud tenant

The deployment experience is one of the highlights of App Controller. Once a deployment has been initiated via the Deploy action, the New Deployment screen opens, which by default will have selected the cloud that was selected when the Deploy button was clicked. Initially, the only configuration possible is to select a template, which is a composite view of all the initial options depending on the cloud selected. The templates available depend on which cloud you are deploying to and which templates your user has been given access to. If you deploy to Azure, you get the standard Azure list of templates and any custom ones you may have added. If you deploy to your on-premises private cloud, you see the templates and services available to your user role. Once a virtual machine is selected, configuration is then possible, such as naming the virtual machine and perhaps some customization depending on the template selected.

If you have defined service templates (which are scalable, multitiered complete services designed within VMM) and made them available to the tenant, then selecting a service results in a rich view in App Controller, as shown in Figure 9.16. This allows all the VMs for the initial service deployment to be configured and then deployed, but no configuration is actually necessary; default virtual machine and computer names are autogenerated for all the virtual machines that are deployed as part of the service. I say the initial deployment because the whole point of service templates is that each tier can have a variable number of virtual machines depending on load, so App Controller just deploys the initial number of VMs defined for each tier, but as the service runs, that number may grow and shrink. App Controller allows the addition of virtual machines to a deployed service via the Scale Out action when using the diagram view of that service. To scale in a tier of a service, you just manually delete virtual machines; there is no specific “Scale In” action.


Figure 9.16 Deploying a service template with App Controller

Once all customization is complete, click the Deploy button to create a deployment job. If the deployment is to a private cloud, the job is created in VMM; if to Azure, the job is created in Azure; and if to an SPF-hosted cloud, the job is created on the SPF infrastructure. No matter what the target is, the user's deployment experience is the same, and the deployment progress can be seen in the Jobs view, which is available via the Jobs navigation node. The actual time of deployment depends entirely on the size and number of virtual machines that make up the deployment and the cloud being deployed to, but the progress can be tracked in detail using the Jobs view.

Once the services/virtual machines are deployed, they can be viewed using either the Services or Virtual Machines view. For now I will focus on the Virtual Machines view because this will be the primary way App Controller is used by most companies in the near term (although I strongly encourage you to look at service templates in VMM because they provide some amazing capabilities in terms of scalability and manageability). The Virtual Machines view will show all the provisioned virtual machines, plus the virtual machines stored in the library that the current user owns and those they have been given access to. Default actions for virtual machines include Startup, Shutdown, Pause, Turn Off, Save, Store, Mount Image, Remote Desktop, and Connect To The Console. The exact options available will depend on the current state of the virtual machine, as shown in Figure 9.17. If the Properties option is selected, additional options are available, such as configuring access for other users and creating and applying snapshots. Remember that the actions available will depend on those granted to the user. Take some time to look around the App Controller interface, both as an administrator and as a regular user, including looking at the information and options for the Library and Jobs views.


Figure 9.17 Options available for a virtual machine via App Controller

While App Controller provides a huge benefit as a single view of all the virtualization services used by an organization, it also opens up some great hybrid capabilities. It is possible to deploy virtual machine templates stored on premises in VMM to Windows Azure, and it is possible to take a virtual machine that is stored in the VMM library and deploy it to Windows Azure; this is how you migrate a VM from on-premises Hyper-V to Azure using App Controller. There is no live migration type of functionality: you stop the VM running on premises and save it in the library using the Store action, and then it can be deployed to Azure using its Copy action. Bringing a virtual machine back from Azure requires copying the VHDs from Azure into your VMM library and then redeploying. Another option for VM migration between on premises and Azure is to leverage System Center Orchestrator, which gives a lot more flexibility and power to the migrations.

Enabling Workflows and Advanced Private Cloud Concepts Using Service Manager and Orchestrator

System Center Orchestrator provides two great capabilities: the ability to communicate with many different systems and the ability to automate a defined series of activities spanning many different systems through runbooks. Both can be highly beneficial to your private cloud implementation.

At the most basic level, Orchestrator can be leveraged to actually create virtual machines, deploy service templates, and even create entire clouds through runbooks. In Figure 9.18, I show a really basic runbook that receives some initialization data, makes a call to SCVMM to create a VM, and then runs some PowerShell to configure ownership of the VM.


Figure 9.18 A basic Orchestrator runbook
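The PowerShell step of such a runbook might look like the following sketch, which deploys a VM from a template into a cloud and then sets its owner. The VMM server, template, cloud, VM, and owner names are all placeholders, and this assumes the VMM 2012 R2 cmdlets; a real runbook would feed these values in from the runbook's initialization data and add error checking.

```powershell
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "vmm01" | Out-Null

# Build a VM configuration from a template and deploy it to a cloud
$template = Get-SCVMTemplate -Name "Win2012R2-Base"
$cloud    = Get-SCCloud -Name "Development Cloud"
$config   = New-SCVMConfiguration -VMTemplate $template -Name "DevVM01"
$vm       = New-SCVirtualMachine -Name "DevVM01" -VMConfiguration $config -Cloud $cloud

# Configure ownership so the requesting user sees the VM in App Controller
Set-SCVirtualMachine -VM $vm -Owner "savilltech\john"
```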

This is really just the tip of the iceberg. Running PowerShell scripts to perform actions through Orchestrator is great, and error checking and updating other applications like Service Manager are added benefits, but you can run PowerShell without Orchestrator. If you look again at Figure 9.18, you will see on the right side a list of activity groups, known as integration packs, including System Center Virtual Machine Manager, System Center Configuration Manager, and VMware vSphere. Each integration pack contains activities specific to its target: for vSphere there are activities for virtual machine creation and management, the same types of activities are available for SCVMM, and the Configuration Manager activities include deploying software. Using integration packs, the built-in Orchestrator activities, and PowerShell, it is possible to automate any action related to the private cloud (and anything else) and to customize exactly how your organization functions. Once you create the runbooks, they can be called by the rest of System Center or triggered automatically. Here are some great scenarios where Orchestrator can be used:

·     Creating a new cloud based on an IT request through a service catalog that calls a created Orchestrator runbook

·     Deploying a new virtual machine or service instance

·     Offering a runbook that automatically patches and reboots all virtual machines for a particular user or business group

·     Automatic scaling up and down of deployed services by triggering runbooks that perform scaling based on performance alerts from Operations Manager

·     Deprovisioning virtual machines or services in development environments that have passed a given age or expiration date

Remember that a key point of the private cloud is automation, and you can't automate by using graphical consoles. Therefore, as you learn System Center and Virtual Machine Manager in particular, look at the series of actions you perform, look at PowerShell, look at the activities in integration packs, and start creating runbooks in Orchestrator that can be reused. Once the runbooks are created, they can be manually triggered using the Silverlight web-based Orchestrator interface or triggered from other systems, such as an item in Service Manager's service catalog. With Orchestrator able to connect to almost any system, with a little bit of work any manual process you perform should be able to be automated and, more importantly, orchestrated with System Center Orchestrator.

I've talked about System Center Service Manager a number of times in this chapter and quite a few times in the previous section. Service Manager is the Configuration Management Database (CMDB) of your organization. It has feeds and connections to all the other System Center components and offers basic ticketing activities such as incidents (things not doing what they should), problems (something is broken), and change requests (I want something). Because Service Manager connects to all the different systems, when you look at a computer in Service Manager, information from all those systems is visible, giving you a single point of truth about the asset and aiding solutions when there are problems in the environment. In Service Manager you can see all of the hardware, software, and patch status gathered from Configuration Manager. You can see information pulled from AD. You can see any alerts that were generated by Operations Manager, plus more complex service dependencies, such as all the systems that are responsible for providing messaging services to the organization.

Because of all these connections to systems, Service Manager can provide a great service for your private cloud. So far I've talked about creating clouds, virtual machines, and services with the SCVMM console, App Controller, PowerShell, and Orchestrator. There are problems, though, when you consider end users with all these approaches. Users typically don't want a separate interface just for requesting a virtual machine, they are not going to run PowerShell scripts you give them, and giving them a list of runbooks they can run through a web interface is likely to baffle them.

So, Service Manager 2012 introduced a new feature called the service catalog. The service catalog is a single source that can contain all the different types of services offered by the organization. This could include creating a new user, requesting the installation of a software package through SCCM, asking for a new keyboard, or really anything that Service Manager has some ability to enable through its connections to other systems. The service catalog is primarily available to end users through a SharePoint site that uses Service Manager Silverlight web parts. Users can browse the service catalog as a single go-to place for all their needs, which makes it a perfect place to offer virtual services on the private cloud. How do you offer private cloud services in the service catalog? You just add the runbooks from Orchestrator; then, when a user makes a request from the service catalog, the normal Service Manager workflows, such as request authorization, can be used, and the workflow will call the runbook in Orchestrator to actually perform the actions. Service Manager and Orchestrator have great bidirectional communications in System Center, allowing the status of the runbook execution to be visible within Service Manager, as shown in Figure 9.19. Once the process is complete, the service request is marked as completed, and the user can even be sent an email. I walk through creating this type of service using Service Manager in an online video.


Figure 9.19 Service catalog view in Service Manager of request offerings that call Orchestrator runbooks

Service Manager also has the ability to create charge-back price sheets that allow prices to be assigned to different aspects of the virtual environment, such as a price per day for the VM; prices per core, per amount of memory, and per amount of storage per day; and additional items such as a price for a highly available VM or a static IP address. These price sheets can then be used within Service Manager to enable charge-back to business units based on their utilization.

How the Rest of System Center Fits into Your Private Cloud Architecture

In this chapter I've touched on a number of components of System Center 2012 R2, such as Virtual Machine Manager, App Controller, Orchestrator, and Service Manager. There are others that, while not key private cloud building blocks, are still important to a complete infrastructure. I will touch on them briefly here and finish with a new solution from Microsoft that builds on System Center 2012 R2 to bring a Windows Azure–like experience to your on-premises private clouds.

Fabric management and deployment of services are critical; however, to ensure the long-term availability and health of the environment, monitoring of the fabric, the Hyper-V hypervisor, the virtual machines, and the applications running inside the virtual machines is also necessary.

System Center Operations Manager (SCOM) provides a rich monitoring solution for Microsoft and non-Microsoft operating systems, applications, and hardware. Any monitoring solution can tell you when something is broken; SCOM does that, but its real power is in its proactive nature and best-practice adherence functionality. SCOM management packs are units of knowledge about a specific application or component. For example, there is an Exchange management pack, and there is a management pack for the Windows Server Domain Name System (DNS) role. The Microsoft mandate is that any Microsoft product should have a management pack written by the product team responsible for that application or operating system component. This means the developers themselves, the people who create the best-practice documents, are creating these management packs that you can then just deploy to your environment. Operations Manager will then raise alerts about potential problems or when best practices are not being followed. Customers often object that when first implemented, Operations Manager floods them with alerts. This can happen for a number of reasons; perhaps there are a lot of problems in the environment that should be fixed, but often Operations Manager should be tuned to ignore configurations that, while perhaps not best practice, are accepted by the organization.

Many third parties provide management packs for their applications and hardware devices. When I think about “all about the application” as a key tenet of the private cloud, the ability for Operations Manager to monitor everything from the hardware, storage, and network all the way through the OS to the application is huge, but it actually goes even further in Operations Manager 2012.

System Center Operations Manager 2012 introduced a number of changes, but two huge ones were around network monitoring and custom application monitoring. First, Microsoft licensed technology from EMC called SMARTS that enables rich discovery and monitoring of network devices. With this functionality, Operations Manager can identify the relationships between network devices and servers to actually understand that, say, port 3 on a particular switch connects to server A, so that if there is a switch problem, Operations Manager knows the affected servers. Information such as CPU and memory utilization is also available for supported network devices.

The other big change was Microsoft's acquisition of AVIcode, which is now Application Performance Monitoring (APM) in Operations Manager 2012. APM provides monitoring of custom applications without any changes needed to the application. APM currently supports .NET applications and Java Enterprise Edition (Java EE). A great way to understand its value is to look at what happens with a custom web application in your environment today, without APM, when performance problems occur.

User phones IT: “Application X is running slow and sucks.”

IT phones the app developer: “Users say Application X is running really slow and really sucks.”

App developer to self: “I suck and have no clue how to start troubleshooting this. I will leave the industry in disgrace.”

With System Center Operations Manager's Application Platform Monitoring configured to monitor this custom application, it changes.

User phones IT: “Application X is running slow and sucks.”

IT phones the app developer: “Users say Application X is running really slow. I see in Operations Manager the APM shows that in function X of module Y this SQL query ‘select blah from blah blah’ to SQL database Z is taking 3.5 seconds.”

App developer to self: “It must be an indexing problem on the SQL server and the index needs to be rebuilt on database Z. I'll give the SQL DBA the details to fix.”

App developer to SQL DBA: “Your SQL database sucks.”

Operations Manager can be used in many aspects of the private cloud. While it's great that it monitors the entire infrastructure to keep it healthy, Operations Manager's ability to monitor resource usage and trending also helps plan growth and can trigger automatic scaling of services if resources hit certain defined thresholds. Figure 9.20 shows an example view through Operations Manager of a complete distributed service that comprises many different elements. To prove the flexibility of Operations Manager, in this example I'm actually monitoring an ESX host through information gained through SCVMM, as well as my NetApp SAN, some processes running on Linux, and a SQL database. All of those different elements make up my complete application to show an overall health rollup, but I can drill down into the details as needed.


Figure 9.20 A view of a distributed service and its various services visible through Operations Manager

Operations Manager 2012 R2 also understands clouds and has a cloud view capability. Once the SCVMM MP has been imported into Operations Manager and the SCVMM connector to Operations Manager has been configured, you will be able to navigate to Microsoft System Center Virtual Machine Manager ⇒ Cloud Health Dashboard ⇒ Cloud Health within the Monitoring workspace. This will list all the clouds. Select a cloud and in the Tasks pane select the Fabric Health Dashboard, which, as shown in Figure 9.21, gives some insight into all the fabric elements that relate to the cloud.


Figure 9.21 The Fabric Health Dashboard for a SCVMM cloud

With the environment being monitored, another key aspect is backup, which I talked about in Chapter 6, namely, Data Protection Manager. As discussed previously, DPM is a powerful solution for the backup and restore of not just Hyper-V but also key Microsoft applications such as Exchange, SQL Server, and SharePoint. Although it gains limited Linux VM backup on Windows Server 2012 R2 Hyper-V, DPM is still very much a Microsoft-focused protection solution.

The final component of System Center is Configuration Manager. SCCM provides capabilities to deploy operating systems, applications, and OS/software updates to servers and desktops. Detailed hardware and software inventory and asset intelligence features are key aspects of SCCM, enabling great insight into the entire organization's IT infrastructure. SCCM 2012 introduced management of mobile devices such as iOS and Android through ActiveSync integration with Exchange and a user-focused management model. One key feature of SCCM for servers is settings management, which allows a desired configuration to be defined, such as OS and application settings, and then applied to a group of servers (or desktops); this can be useful for compliance requirements. The challenge I face in recommending SCCM for servers today is that its focus seems to be shifting mainly to the desktop. The benefits SCCM can bring to servers, such as patching, host deployment, and desired configuration, are actually better handled through other mechanisms. For patch management, both SCVMM and Failover Clustering one-click patching leverage WSUS and not SCCM. For host deployment, SCVMM has the ability to deploy physical servers for Hyper-V and for file servers and automatically manage cluster membership and more. Desired configuration is possible through PowerShell v4's Desired State Configuration feature. Therefore, if you are already using SCCM, you can take advantage of some of those capabilities in your environment, but I would not implement SCCM for the sole purpose of server management; there are better options, in my opinion, in the other components and base operating systems.
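As an illustration of that last point, here is a minimal Desired State Configuration sketch; the node name, output path, and the specific settings enforced are examples only. It ensures the Hyper-V role is installed and its management service is running:

```powershell
# Requires PowerShell v4 (Windows Server 2012 R2) for Desired State Configuration
Configuration HyperVHostBaseline {
    Node "HV01" {
        # Ensure the Hyper-V role is present on the host
        WindowsFeature HyperV {
            Ensure = "Present"
            Name   = "Hyper-V"
        }
        # Ensure the Virtual Machine Management service is running
        Service VMMS {
            Name        = "vmms"
            StartupType = "Automatic"
            State       = "Running"
            DependsOn   = "[WindowsFeature]HyperV"
        }
    }
}

# Compile the configuration to a MOF file and push it to the node
HyperVHostBaseline -OutputPath C:\DSC
Start-DscConfiguration -Path C:\DSC -Wait -Verbose
```

The Local Configuration Manager on the node then periodically checks and can correct drift from this declared state, which is exactly the settings-management scenario described above without involving SCCM.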

So far, I have talked about many different components and how they can be brought together to provide a private cloud solution. However, a fair amount of work is involved in producing a full private cloud offering. For System Center 2012, Microsoft had a solution accelerator, the System Center Cloud Service Process Pack, that provided guidance and add-ons for System Center 2012 to create a fully functioning private cloud. This is no longer provided for System Center 2012 R2. Instead, for System Center 2012 R2, Microsoft introduced the Windows Azure Pack (WAP), which builds on System Center 2012 R2, SPF, and a new component in System Center 2012 R2, Orchestrator Service Management Automation (SMA), to bring a Windows Azure–like experience to your on-premises private cloud.

Microsoft publishes information on the Windows Azure Pack along with detailed deployment guidance. At the time of this writing, WAP brings a number of Windows Azure experiences to your on-premises private cloud, such as IaaS, web PaaS, and database as a service, but other services can also be made available through custom offerings. Microsoft has already published some WAP gallery items for Exchange, Lync, and SharePoint services. Right now I think of WAP as focused on hosters and organizations that offer a hoster-type experience to their business units/users; in other words, organizations that are very mature in their private cloud adoption and are focused on providing services. I think that longer term the Microsoft direction for end-user interaction will be via WAP, because Microsoft will want to consolidate the number of different end-user experiences across the separate App Controller, Service Manager, and WAP options.

The Bottom Line

1.  Explain the difference between virtualization and the private cloud. Virtualization enables multiple operating system instances to run on a single physical piece of hardware by creating multiple virtual machines that share the resources of the physical server. This enables greater utilization of a server's resources, a reduction in server hardware, and potential improvements to provisioning processes. The private cloud is fundamentally a management solution that builds on virtualization but brings additional capabilities by interfacing with the entire fabric, including network and storage, to provide a complete abstraction and therefore management of the entire infrastructure. This allows greater utilization of all available resources, which leads to greater scalability. Because of the abstraction of the actual fabric, it is possible to enable user self-service based on users' assignment to various clouds.

1.  Master It Do you need to change your fabric to implement the private cloud?

2.  Describe the must-have components to create a Microsoft private cloud. The foundation of a Microsoft private cloud solution would be virtualization hosts using Hyper-V and then SCVMM and App Controller to provide the core fabric management, abstraction, cloud creation, and end-user self-service functionality. Orchestrator and Service Manager can be utilized to build on this core set of private cloud functionality to bring more advanced workflows, authorization of requests, and charge-back functionality.