Configure and manage high availability - Exam Ref 70-417 Upgrading Your Skills to Windows Server 2012 R2 (2014)

Exam Ref 70-417 Upgrading Your Skills to Windows Server 2012 R2 (2014)

Chapter 10. Configure and manage high availability

The Configure and Manage High Availability domain relates to failover clustering and the live migration of virtual machines.

A failover cluster, as you know, is a group of two or more computers that work together to help ensure the availability of a service or application. (In Windows Server 2012 and Windows Server 2012 R2, clustered services and applications are known as roles.) There are a number of improvements in failover clustering in Windows Server 2012 and Windows Server 2012 R2, beginning with scalability: Failover clusters now support up to 64 nodes (as opposed to 16 in Windows Server 2008 R2). Failover clusters in Windows Server 2012 and Windows Server 2012 R2 also support many new enhancements, such as Cluster-Aware Updating, role priority, VM monitoring, and node drain. Windows Server 2012 R2 specifically, for its part, has brought yet another round of new features and improvements, including Active Directory-detached clusters, Dynamic Witness, and virtual machine network health detection.

Live migration of virtual machines (VMs) used to be restricted to failover clusters, but this feature has been expanded to provide uninterrupted availability in all domain contexts.

To learn about the new developments in failover clustering and live migration for the 70-417 exam, it’s best (as always) to implement these features in a test environment. For failover clustering, the good news is that you can now perform all of this testing on a single server with VMs running in Hyper-V and using the new built-in iSCSI Target feature for shared storage. For live migration, you will need two physical servers.

Objectives in this chapter:

Image Objective 10.1: Configure failover clustering

Image Objective 10.2: Manage failover clustering roles

Image Objective 10.3: Manage virtual machine (VM) movement

Objective 10.1: Configure failover clustering

Failover clustering in Windows Server 2012 and Windows Server 2012 R2 introduces many improvements. The topics covered here are the ones most likely to appear on the 70-417 exam.


This section covers the following topics:

Image Cluster storage pools

Image Cluster shared volumes (CSVs)

Image Virtual hard disk sharing for guest clusters in Windows Server 2012 R2

Image Dynamic quorum

Image Dynamic witness in Windows Server 2012 R2

Image Node drain

Image Cluster-Aware Updating (CAU)

Image Active Directory-detached clusters in Windows Server 2012 R2


Cluster storage pools

In Windows Server 2012 and Windows Server 2012 R2, you can now draw from data storage provided by a Serial Attached SCSI (SAS) disk array to create one or more storage pools for a failover cluster. These storage pools are similar to the ones you can create for an individual server by using Storage Spaces. As with the Storage Spaces feature, you can use storage pools in a failover cluster as a source from which you can then create virtual disks and finally volumes.

To create a new storage pool, in Failover Cluster Manager, navigate to Failover Cluster Manager\[Cluster Name]\Storage\Pools, right-click Pools, and then select New Storage Pool from the shortcut menu, as shown in Figure 10-1. This step starts the same New Storage Pool Wizard used with Storage Spaces. (In fact, if you have a shared SAS disk array, you can use Server Manager to create the pool and use the Add Storage Pool option to add it to the machine.) After you create the pool, you need to create virtual disks from the new pool and virtual volumes from the new disks before you can use the clustered storage space for hosting your clustered workloads.

Image

FIGURE 10-1 Creating a new storage pool for a cluster
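The same pool can also be created from Windows PowerShell. The following is a minimal sketch, assuming the shared SAS disks are already visible to the cluster nodes; the pool name Pool1 is a placeholder:

```powershell
# Find the clustered storage subsystem and the disks eligible for pooling
$subsystem = Get-StorageSubSystem -FriendlyName "Clustered*"
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool from all poolable disks (remember: at least three SAS disks)
New-StoragePool -FriendlyName Pool1 `
    -StorageSubSystemFriendlyName $subsystem.FriendlyName `
    -PhysicalDisks $disks
```

As with the wizard, creating the pool is only the first step; you still need to create virtual disks (New-VirtualDisk) and volumes from the pool before clustered workloads can use it.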

The availability of storage pools for failover clusters has implications for the 70-417 exam, especially because these storage pools have a number of requirements and restrictions that could easily serve as the basis for a test question. Note the following requirements for failover cluster storage pools:

Image A minimum of three physical drives, with at least 4 GB capacity each.

Image Only SAS-connected physical disks are allowed. No additional layer of RAID (or any disk subsystem) is supported, whether internal or external.

Image Fixed provisioning only for virtual disks; no thin provisioning.

Image When creating virtual disks from a clustered storage pool, only simple and mirror storage layouts are supported. Parity layouts are not supported.

Image The physical disks used for a clustered pool must be dedicated to that one pool. Boot disks should not be added to a clustered pool.


More Info

For more information about cluster storage pools, visit http://blogs.msdn.com/b/clustering/archive/2012/06/02/10314262.aspx.


Cluster shared volumes (CSVs)

Cluster shared volumes (CSVs) are a new type of storage used only in failover clusters. CSVs first appeared in Windows Server 2008 R2, so if you earned your last certification before this important feature was introduced, you might have missed CSVs completely. In this case, you need to understand the basics about them before taking the 70-417 exam.

The biggest advantage of CSVs is that they can be shared by multiple cluster nodes at a time. This is not normally possible with shared storage. In fact, even different volumes created on the same logical unit number (LUN) cannot normally be shared by different cluster nodes at the same time.

CSVs achieve this shared access of volumes by separating the data from different nodes into virtual hard disk (VHD) files. Within each shared volume, multiple VHDs are stored, each used as the shared storage for a particular role. The CSVs containing these VHDs are then mapped to a common, integrated namespace on all nodes in the cluster. On every failover cluster configured with CSVs, the CSVs appear on every node as subfolders in the \ClusterStorage folder on the system drive. Example pathnames are C:\ClusterStorage\Volume1, C:\ClusterStorage\Volume2, and so on. The volume objects in the path act as links to remote LUNs, so you are not limited by the size of your local drive.

CSVs are formatted with NTFS (or, optionally, ReFS in Windows Server 2012 R2), but to distinguish them from normal NTFS and ReFS volumes, the Windows Server 2012 and Windows Server 2012 R2 interfaces display these volumes as formatted with CSVFS, or the Cluster Shared Volume File System. An example of a CSV is shown in Figure 10-2.

Image

FIGURE 10-2 A cluster shared volume

To create a CSV in Windows Server 2012 or Windows Server 2012 R2, first provision a disk from shared storage, such as from an iSCSI target. Use Server Manager to create a volume from this disk, as shown in Figure 10-3.

Image

FIGURE 10-3 Creating a new volume in Server Manager

Assign the new volume to the desired failover cluster, as shown in Figure 10-4. (The name of the cluster appears as a server name.)

Image

FIGURE 10-4 Assigning a new volume to a cluster

In Failover Cluster Manager, the new volume will appear as a disk. Right-click the disk and select Add To Cluster Shared Volumes from the shortcut menu, as shown in Figure 10-5.

Image

FIGURE 10-5 Adding a volume to cluster shared volumes
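You can perform this same step in Windows PowerShell. The sketch below assumes the available clustered disk is named Cluster Disk 1:

```powershell
# Convert an available clustered disk into a cluster shared volume
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# Verify; the volume now appears under C:\ClusterStorage on every node
Get-ClusterSharedVolume
```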

In Windows Server 2008 R2, CSVs were used as storage for only one type of workload hosted in a failover cluster: a highly available VM. In Windows Server 2012 and Windows Server 2012 R2, CSVs are now also used as the only storage type for a new role, the Scale-Out File Server, which is described later in this chapter. Another important use for CSVs is with live migration in failover clusters (another feature described later in this chapter). Although CSVs are not required for live migration, they are highly recommended because they optimize the performance of the migration and reduce downtime to almost zero.

How might CSVs appear on the 70-417 exam? If there’s a question about CSVs directly, it could come in the form of a requirement that states you want to “minimize administrative overhead” when designing storage for a highly available VM. More likely, you will simply see CSV mentioned in the setup to a question that isn’t just about CSVs.


Note

Why are Cluster shared volumes useful? Here is the original problem CSVs were designed to solve: In Windows Server 2008 and earlier versions of Windows Server, only one cluster node could access a LUN at any given time. If any application, service, or VM connected to a LUN failed and needed to be moved to another node in the failover cluster, every other clustered application or VM on that LUN would also need to be failed over to a new node and potentially experience some downtime. To avoid this problem, each clustered role was typically connected to its own unique LUN as a way to isolate failures. This strategy created another problem, however: a large number of LUNs that complicated setup and administration.

With CSVs, a single LUN can be accessed by different nodes at the same time, as long as the different nodes are accessing distinct VHDs or VHDXs on the LUN. You can run these roles on any node in the failover cluster, and when a role fails, it can fail over to any other physical node in the cluster without affecting other roles (services or applications) hosted on the original node. CSVs thus add flexibility and simplify management.


Virtual hard disk sharing for guest clusters in Windows Server 2012 R2

Before Windows Server 2012 R2, when you wanted to provision storage for a guest cluster (that is, a failover cluster in which the nodes are virtualized), you needed to provision storage from within the operating system of these Hyper-V guests by connecting to an underlying storage infrastructure, such as an iSCSI or Fibre Channel SAN.

However, allowing visibility of an underlying storage infrastructure from a guest OS in Hyper-V is not always ideal. For example, if you are a cloud hosting company you might want to allow customers to create a failover cluster from multiple VMs hosted in your cloud. In this case, you want to be able to provide shared storage to these VMs on demand without allowing customers to see your underlying SAN fabric.


Important

A guest cluster is a virtualized cluster in which the nodes are VMs and the failover clustering feature is installed in the guest operating system of each of those VMs. In contrast, the terms physical host cluster, Hyper-V host cluster, or even just host cluster can be used to refer to a failover cluster in which the nodes are physical computers and the failover clustering feature is installed in the host operating system of each node.


With VHDX sharing, Windows Server 2012 R2 introduces a way to provide shared storage to guest clusters that hides the underlying storage infrastructure. VHDX sharing requires that the Hyper-V hosts supporting the guest cluster themselves be configured in their own failover cluster, so as a prerequisite you must have a (virtualized) guest cluster on top of a (physical) host cluster. After you configure the shared VHDX from the physical host, the virtual hard disk appears as a raw SAS disk that is eligible to be added to the guest cluster from within the guest operating system.

Here’s how to configure the shared VHDX: First, on one of the nodes of the physical host cluster, provision a new disk from shared storage and add it to CSVs as described in the preceding section, “Cluster Shared Volumes.” (You can also provide a new volume to the host cluster by means of an SMB 3.0 share on a Scale-Out File Server.) Next, in Hyper-V Manager, open the settings of the guest VM that is acting as the management node in your guest cluster. Select the SCSI Controller settings and choose to add a new hard disk. The new SCSI disk must be a VHDX file and must be saved in the path to the new CSV or SMB 3.0 share that has been configured for the physical host cluster. Finally, in the settings of the guest VM in Hyper-V Manager, expand the new SCSI disk, select Advanced Features, and then select the Enable Virtual Hard Disk Sharing check box.

When you start the VM next, a new raw SAS disk will appear in Disk Management and Server Manager. You can use Failover Cluster Manager in the guest to add this disk to cluster storage in the guest cluster.
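From the physical host, the same configuration can be sketched in Windows PowerShell; the VM name, path, and file name below are placeholders:

```powershell
# Attach a VHDX stored on a CSV to the guest's SCSI controller
# with virtual hard disk sharing enabled (Windows Server 2012 R2)
Add-VMHardDiskDrive -VMName Guest1 -ControllerType SCSI `
    -Path C:\ClusterStorage\Volume1\SharedData.vhdx -ShareVirtualDisk
```

Run the command once per guest cluster node VM so that each node sees the same shared disk.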

Remember the following points about VHDX sharing:

Image The shared virtual disk can act as a data disk or witness disk for the guest cluster. It cannot be used as the operating system disk.

Image The shared virtual disk must be a VHDX file, not a VHD file.

Image The shared virtual disk must be attached to the virtual machine’s SCSI controller, not its IDE controller.

Image When you attach the new virtual hard disk to the SCSI controller, don’t click Apply until you have selected the Enable Virtual Hard Disk Sharing check box in Advanced Features.

Dynamic quorum

Dynamic quorum configuration is a new feature in Windows Server 2012 and Windows Server 2012 R2 in which the number of votes required to reach quorum automatically adjusts to the number of active nodes in the failover cluster. If one or more nodes shuts down, the number of votes to reach quorum in Windows Server 2012 and Windows Server 2012 R2 changes to reflect the new number of nodes. With dynamic quorum, a failover cluster can remain functional even after half of its nodes fail simultaneously. In addition, it’s possible with dynamic quorum for the cluster to remain running with only one node remaining.

The applicability of dynamic quorum to the 70-417 exam is uncertain, mainly because by definition this new feature doesn’t require you to remember any configuration settings. However, if you see a question in which nodes in the cluster are running an earlier version of Windows and “you want to have the number of votes automatically adjust based on the number of available nodes at any one time,” you know you need to upgrade all nodes to Windows Server 2012 or later. In addition, remember that dynamic quorum works only with the following quorum configurations and not with the Disk Only quorum configuration:

Image Node Majority

Image Node and Disk Majority

Image Node and File Share Majority
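These quorum configurations can also be inspected and changed from Windows PowerShell; the witness disk resource name below is a placeholder:

```powershell
# Show the current quorum configuration for the cluster
Get-ClusterQuorum

# Switch to Node and Disk Majority using a specific witness disk
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 2"
```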


Image Exam Tip

Remember that the cluster quorum settings determine the number of elements in a failover cluster that must remain online for it to continue running. You access these settings by right-clicking a cluster in the Failover Cluster Manager console tree, selecting More Actions and then clicking Configure Cluster Quorum Settings.


Dynamic witness in Windows Server 2012 R2

A witness in a failover cluster, as you remember, is a disk or file share that holds information about the cluster configuration. The witness can act as a tiebreaker vote that helps to achieve quorum when only half of the nodes in a cluster can communicate with each other. Because of this tiebreaker role for the witness, Microsoft recommended, until Windows Server 2012 R2, that you configure a witness disk or file share only when your cluster contains an even number of nodes.

However, the extra vote provided by the witness could occasionally cause a problem in versions of Windows Server before Windows Server 2012 R2. The witness did ensure that a majority of votes was possible when exactly 50 percent of the nodes were online and could communicate with each other. However, if one of the original nodes failed, the witness raised the possibility that the remaining votes could be split evenly between two groups, preventing either group from achieving the majority needed for quorum.

This potential problem is solved in Windows Server 2012 R2 through dynamic witness. With dynamic witness, the witness is active only when the number of online nodes is even. When the number of active nodes is odd, the witness loses its vote.

Because of the new dynamic nature of witness disks and witness file shares in Windows Server 2012 R2, Microsoft recommends that you always configure a witness for failover clusters in Windows Server 2012 R2. Unlike in previous versions of Windows Server, a witness is now always included as a step when you run the Configure Cluster Quorum Wizard and choose the default quorum configuration. The witness is then active only when needed.

Node drain

Node drain is a feature new to Windows Server 2012 and Windows Server 2012 R2 that simplifies the process of shutting a node down for maintenance. In previous versions of Windows, if you wanted to bring a node down for maintenance, you first needed to pause the node and then move all hosted applications and services (now called roles) over to other nodes. With node drain, these two steps are combined into one.

To prepare a node to be shut down for maintenance in this way, first navigate to the Nodes container in the Failover Cluster Manager console tree. Then right-click the node you want to shut down in the details pane, point to Pause and then select Drain Roles, as shown in Figure 10-6.

Image

FIGURE 10-6 Draining roles from a node

To achieve this same result by using Windows PowerShell, use the Suspend-ClusterNode cmdlet.
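For example, assuming a node named Node1, a drain-and-resume sequence might look like this:

```powershell
# Pause Node1 and drain (move) its clustered roles to other nodes
Suspend-ClusterNode -Name Node1 -Drain

# After maintenance, bring the node back and fail roles back immediately
Resume-ClusterNode -Name Node1 -Failback Immediate
```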

Automatic node drain on shutdown in Windows Server 2012 R2

In Windows Server 2012 R2, if you choose to shut down a node before draining the roles, the roles will automatically be live-migrated to other nodes before the shutdown is performed. However, this feature is designed mainly as a backup safety mechanism. It is preferable to drain roles manually to make sure these roles have migrated before shutting down a node.

Cluster-Aware Updating (CAU)

Cluster-Aware Updating (CAU) is a new feature in Windows Server 2012 and Windows Server 2012 R2 that addresses the difficulty of performing software updates on failover cluster nodes. This difficulty stems from the fact that updating software normally requires a system restart. To maintain the availability of services hosted on failover clusters in previous versions of Windows, you needed to move all roles off one node, update the software on that node, restart the node and then repeat the process on every other node, one at a time. Windows Server 2008 R2 failover clusters could include up to 16 nodes, so this process sometimes had to be repeated as many as 16 times. In Windows Server 2012 and Windows Server 2012 R2, failover clusters can scale up to 64 nodes. At this point, the older, manual method of updating software on failover clusters is simply no longer practical.

Instead, Windows Server 2012 and Windows Server 2012 R2 automate the process of updating software for you. To initiate the process of updating a failover cluster, simply right-click the cluster in the list of servers in Server Manager and then select Update Cluster from the shortcut menu, as shown in Figure 10-7.

Image

FIGURE 10-7 Manually updating a cluster

By default, only updates configured through Windows Update are performed. Updates are received as they normally would be, either directly from the Microsoft Update servers or through Windows Server Update Services (WSUS), depending on how Windows Update is configured. Beyond this default functionality, CAU can be extended through third-party plug-ins so that other software updates can also be performed.


More Info

For more information about how CAU plug-ins work, visit http://technet.microsoft.com/en-us/library/jj134213.


The preceding step shows how to trigger an update to a cluster manually. Triggering updates manually might be too straightforward a task to appear on the 70-417 exam. More likely, you could see a question about configuring self-updates. You can access these self-update configuration settings in Failover Cluster Manager by right-clicking the cluster name in the console tree, pointing to More Actions, and then selecting Cluster-Aware Updating, as shown in Figure 10-8.


Note

Manual updating is called remote updating mode when you use this method to update a failover cluster from a remote machine on which the failover cluster management tools have been installed.


Image

FIGURE 10-8 Opening CAU actions

This step opens the Cluster-Aware Updating dialog box, shown in Figure 10-9.

Image

FIGURE 10-9 CAU actions

To configure self-updating for the cluster, click Configure Self-Updating Options beneath Cluster Actions. This step will open the Configure Self-Updating Options Wizard. You can enable self-updating on the cluster on the second (Add Clustered Role) page of the wizard by selecting the option to add the CAU clustered role, with self-updating mode enabled (shown in Figure 10-10).

Image

FIGURE 10-10 Enabling self-updating mode for CAU
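Both remote updating and self-updating can also be configured with the CAU cmdlets. A sketch, with Cluster1 as a placeholder cluster name:

```powershell
# Add the CAU clustered role to the cluster with self-updating enabled
Add-CauClusterRole -ClusterName Cluster1 -DaysOfWeek Sunday `
    -EnableFirewallRules -Force

# Or trigger a one-time updating run from a remote management machine
Invoke-CauRun -ClusterName Cluster1 -Force
```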


Note

The Cluster-Aware Updating dialog box also allows you to perform the following actions:

Image Apply updates to the cluster

Image Preview updates for the cluster

Image Create or modify Updating Run Profile directly (without running a wizard)

Image Generate a report on past Updating Runs

Image Analyze cluster updating readiness


The third (Self-Updating Schedule) page of the wizard lets you specify a schedule for updating. The fourth (Advanced Options) page lets you change profile options, as shown in Figure 10-11. These profile options let you set time boundaries for the update process and other advanced parameters.


More Info

For more information about profile settings for CAU, visit http://technet.microsoft.com/en-us/library/jj134224.aspx.


Image

FIGURE 10-11 Configuring advanced options for Cluster-Aware Self-Updating


More Info

For a more detailed description of CAU, visit http://blogs.technet.com/b/filecab/archive/2012/05/17/starting-with-cluster-aware-updating-self-updating.aspx.



Image Exam Tip

Expect to see a question or two that tests your knowledge of Windows PowerShell cmdlets for failover clusters. For a complete list of those cmdlets, visit “Failover Clusters Cmdlets in Windows PowerShell” at http://technet.microsoft.com/en-us/library/hh847239.aspx.


Active Directory-Detached Clusters in Windows Server 2012 R2

In the first release of Windows Server 2012 (not Windows Server 2012 R2) and earlier versions of Windows Server, every failover cluster requires its own computer object in Active Directory, with a name corresponding to the name of the cluster. This computer object is created automatically when the cluster is created. However, in Windows Server 2012 R2, you can now also create a failover cluster that doesn’t require a computer account in AD DS.

The purpose of avoiding this requirement is to reduce the complexity of deploying, managing, and maintaining the cluster. For example, you will no longer require elevated privileges in the domain to create the cluster. You also avoid the cluster failure that would arise from an accidental deletion of the cluster’s computer account.

This new type of failover cluster that doesn’t require a computer account is known as an Active Directory-detached cluster. Note that even though these failover clusters do not require computer objects in Active Directory, they still depend on Active Directory for other services such as authentication. Cluster nodes therefore must still be members of the same Active Directory domain. All nodes, in addition, must be running Windows Server 2012 R2.


Image Exam Tip

Active Directory-detached clusters are not recommended for scenarios that require Kerberos authentication. Authentication of the cluster network name uses NTLM, not Kerberos.



Note

The only workload that is currently supported without qualification for Active Directory-detached clusters is SQL Server. (File server and Hyper-V workloads are currently supported but not recommended.)


To create an Active Directory-detached cluster, you use Windows PowerShell, not Failover Cluster Manager. Use the New-Cluster cmdlet to create the new cluster, with the -AdministrativeAccessPoint parameter set to a value of Dns. (The Dns value ensures that the cluster name will be registered in DNS. To avoid registering these DNS names, you would use the None value.)

For example, the following command creates a two-node Active Directory-detached cluster with the name Cluster1.

New-Cluster Cluster1 -Node Node1,Node2 -StaticAddress 192.168.2.10 -NoStorage -AdministrativeAccessPoint Dns

After you create the cluster, you can configure and manage it in Failover Cluster Manager. (The cluster may take several minutes to appear in the interface.)

To determine whether a cluster is Active Directory-detached, type the following command at a Windows PowerShell prompt on one of the cluster nodes:

(Get-Cluster).AdministrativeAccessPoint

If the output is Dns, then the cluster is Active Directory-detached. If the output is ActiveDirectoryAndDns, then the cluster is a normal cluster integrated with Active Directory.

Configuring Cluster Properties in Windows PowerShell

You can use Windows PowerShell to view and set properties on entire clusters or on individual nodes. Many of these cluster and node properties, in fact, can only be viewed and configured through Windows PowerShell. To view or configure a cluster property or node property, you can expose it through the Get-Cluster cmdlet.

WitnessDynamicWeight

For example, the WitnessDynamicWeight cluster property relates to the Dynamic Witness feature and determines whether the witness assigned to the cluster currently has a quorum vote. To determine the current status of the quorum witness vote, type the following command at an elevated Windows PowerShell prompt:

(Get-Cluster).WitnessDynamicWeight

If the command returns a value of 1, it indicates the witness currently has a vote. A value of 0 indicates the witness does not currently have a vote.

DynamicQuorum

The DynamicQuorum cluster property determines whether the Dynamic Quorum feature is enabled or disabled. (It is enabled by default in Windows Server 2012 R2.) To disable Dynamic Quorum, type the following:

(Get-Cluster).DynamicQuorum = 0

To re-enable Dynamic Quorum, type:

(Get-Cluster).DynamicQuorum = 1

DatabaseReadWriteMode

The DatabaseReadWriteMode cluster property is used to configure the Global Update Manager. The Global Update Manager is the software component within the Failover Clustering feature that is used to manage cluster database updates, such as the type that occurs when a node goes offline. Before Windows Server 2012 R2, the Global Update Manager always ensured that all nodes received the update before the update was committed to the cluster database. The Cluster Service therefore always read the database from the local node because it was known to be up-to-date.

Beginning in Windows Server 2012 R2, however, you can now alter the mode in which the Global Update Manager handles database updates and reads. The way that cluster database updates were handled in previous versions of Windows Server is now known as “All (Write) And Local (Read)” mode, and this mode is associated with a DatabaseReadWriteMode property value of 0. The All (Write) And Local (Read) mode is still the default in Windows Server 2012 R2 except when the clustered role is a VM.

The default mode when the clustered role is a VM is “Majority (Read And Write)”, which is associated with a DatabaseReadWriteMode property value of 1. In this latter mode, the Global Update Manager commits an update to the cluster database when the update is received by a majority of nodes. Instead of reading automatically from the local node, the Cluster Service in Majority (Read And Write) mode checks a majority of the running nodes and reads the data with the latest timestamp.

To view the currently active Global Update Manager mode, type the following:

(Get-Cluster).DatabaseReadWriteMode

To set the Global Update Manager mode to All (Write) And Local (Read), type the following:

(Get-Cluster).DatabaseReadWriteMode = 0

To set the Global Update Manager mode to Majority (Read And Write), type the following:

(Get-Cluster).DatabaseReadWriteMode = 1

NodeWeight

Finally, in some cases, you might want to remove the quorum vote from a particular node or restore that vote after it has been removed. The ability to cast a vote for quorum is governed by the NodeWeight node property. A value of 1 (the default) indicates that the node has a right to vote for quorum. A value of 0 indicates that it does not.

To view the vote status of a node named Node2, you would type the following:

(Get-ClusterNode Node2).NodeWeight

To remove the quorum vote from that node, type the following:

(Get-ClusterNode Node2).NodeWeight = 0

To give the quorum vote back to the same node, type the following:

(Get-ClusterNode Node2).NodeWeight = 1


Image Exam Tip

Even though Network Load Balancing (NLB) hasn’t changed significantly since Windows Server 2008 and isn’t mentioned in this chapter, be sure to review the feature and its configurable options. For example, remember that in port rules for Network Load Balancing clusters, the Affinity setting determines how you want multiple connections from the same client handled by the NLB cluster. “Affinity: Single” redirects clients back to the same cluster host. “Affinity: Network” redirects clients from the local subnet to the cluster host. “Affinity: None” doesn’t redirect multiple connections from the same client back to the same cluster host.



Image Exam Tip

Make sure you know how to set the cluster properties described in the previous section.


Objective summary

Image In Windows Server 2012 and Windows Server 2012 R2, you can create storage pools for failover clusters. Cluster storage pools are compatible only with SAS disks.

Image CSVs are a new type of storage used only in failover clusters. With CSVs, each node on a failover cluster creates its own virtual disk on the volume. The storage is then accessed through pathnames that are common to every node on the cluster.

Image In Windows Server 2012 R2, you can under certain circumstances enable sharing on a VHDX file that is attached to a guest VM in Hyper-V Manager. The VHDX when shared in this way is then exposed to the guest operating system as a raw SAS disk. The SAS disk is eligible to be added to a cluster configured in the guest OS.

Image With cluster-aware updating in Windows Server 2012, you can automate the process of updating Windows in a failover cluster.

Image In Windows Server 2012 R2, you can create an Active Directory-detached failover cluster, which doesn’t create a computer object for the cluster in Active Directory.

Objective review

Answer the following questions to test your knowledge of the information in this objective. You can find the answers to these questions and explanations of why each answer choice is correct or incorrect in the “Answers” section at the end of the chapter.

1. You are designing storage for a failover cluster on two servers running Windows Server 2012 R2. You want to provision disks for the cluster that will enable you to create a storage pool for it. Which of the following sets of physical disks could you use to create a storage pool for the failover cluster?

A. Three individual disks in an iSCSI storage array without any RAID configuration

B. Four disks in an iSCSI storage array configured as a RAID 5

C. Three individual disks in a SAS storage array without any RAID configuration

D. Four disks in a SAS storage array configured as a RAID 5

2. You are an IT administrator for Adatum.com. The Adatum.com network includes 50 servers and 750 clients. Forty of the servers are virtualized. To provide storage for all servers, the Adatum.com network uses an iSCSI-based storage area network (SAN).

You are designing storage for a new VM hosted in a failover cluster. Your priorities for the storage are to simplify management of SAN storage and to minimize downtime in case of node failure.

What should you do?

A. Use Server Manager to create a storage pool.

B. Keep VM storage on a CSV.

C. Provision volumes from an external SAS disk array instead of the iSCSI SAN.

D. Assign a mirrored volume to the cluster.

3. You have configured high availability for a cluster-aware application named ProseWareApp in a two-node failover cluster named Cluster1. The physical nodes in Cluster1 are named Node1 and Node2 and they are both running Hyper-V in Windows Server 2012 R2. Node1 is currently the active node for ProseWareApp.

You want to configure Cluster1 to perform critical Windows Updates with a minimum of administrative effort and a minimum of downtime for ProseWareApp users. What should you do?

A. Drain the roles on Node1 and then start Windows Update on Node1.

B. In Server Manager on Node1, right-click Cluster1 and select Update Cluster.

C. Configure cluster-aware updating to add the CAU clustered role to Cluster1 with self-updating mode enabled.

D. Configure Task Scheduler to run Windows Update daily on Node1 outside of business hours.

Objective 10.2: Manage failover clustering roles

This objective covers the configuration of roles in failover clusters. Within this area, there are three new features that you are likely to be tested on: the Scale-Out File Server role, role priority, and VM monitoring.


This section covers the following topics:

Image Create a Scale-Out File Server (SoFS)

Image Assign role priority

Image Configure VM monitoring


Creating a Scale-Out File Server (SoFS)

Windows Server 2012 and Windows Server 2012 R2 let you configure two different types of file server roles for high availability: a file server for general use (which is the same option available in previous versions of Windows Server) and a new Scale-Out File Server For Application Data alternative. Both of these options are provided on the File Server Type page of the High Availability Wizard, as shown in Figure 10-12. Each of these clustered file server types is used for different purposes and they can both be hosted on the same node at the same time.


More Info

For more information about SoFS, visit http://technet.microsoft.com/en-us/library/hh831349.


Image

FIGURE 10-12 Selecting a Scale-Out File Server for the File Server role type

You will likely see a question or two on the 70-417 exam that tests basic knowledge about Scale-Out File Servers (SoFS). Here’s what you need to remember:

Image SoFS clusters are not designed for everyday user storage but for applications, such as SQL database applications, that store data on file shares and keep files open for extended periods of time.

Image Client requests to connect to an SoFS cluster are distributed among all nodes in the cluster. For this reason, SoFS clusters can handle heavy workloads, and their capacity grows proportionally with the number of nodes in the cluster.

Image SoFS clusters use only CSVs for storage.

Image SoFS clusters are not compatible with BranchCache, Data Deduplication, DFS Namespace servers, DFS Replication, or File Server Resource Manager.
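After the role is created through the High Availability Wizard (or the equivalent cmdlet), the application shares themselves must be continuously available SMB shares on a CSV. A hedged sketch of the PowerShell equivalent, with hypothetical role, share, path, and account names:

```powershell
# Sketch: create the Scale-Out File Server role in the cluster, then create a
# continuously available share on a CSV for application data.
Add-ClusterScaleOutFileServerRole -Name "SOFS1"
New-SmbShare -Name "AppData" -Path "C:\ClusterStorage\Volume1\Shares\AppData" `
    -ContinuouslyAvailable $true -FullAccess "CONTOSO\SQLService"
```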


Image Exam Tip

Learn both the benefits and limitations of SoFS well. If a scenario requires a highly available file server for application data and all added nodes must remain online and able to respond to client requests, an SoFS is a good fit. But don’t be tricked into selecting SoFS as the file server type for a new clustered file server just because the question states it will host application data. If the file server is also used with incompatible features (such as BranchCache, DFS, or File Server Resource Manager), or if no CSVs are available, you must choose File Server For General Use as the file server type.


Assign role startup priority

Unlike previous versions of Windows Server, Windows Server 2012 and Windows Server 2012 R2 let you assign one of four startup priorities to clustered roles: High, Medium, Low, or No Auto Start. Medium is the default priority. In the case of node failure, this priority setting determines the order in which roles are failed over and started on another node. A higher priority role both fails over and starts before the role of the next highest priority. If you assign the No Auto Start priority to a role, the role is failed over after the other roles but is not started on the new node. The purpose of startup priority is to ensure that the most critical roles have prioritized access to resources when they fail over to another node.

To change the startup priority of a role, right-click the role in Failover Cluster Manager, point to Change Startup Priority and select the desired priority, as shown in Figure 10-13.
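The same change can be made in Windows PowerShell by setting the Priority property of the cluster group. This is a sketch with a hypothetical role name; the numeric values map to the priorities shown in the menu:

```powershell
# Sketch: set the startup priority of the clustered role "ProseWareApp" to High.
# Priority values: 0 = No Auto Start, 1000 = Low, 2000 = Medium (default),
# 3000 = High.
(Get-ClusterGroup -Name "ProseWareApp").Priority = 3000
```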

Image

FIGURE 10-13 Setting the startup priority of a role

Role startup priority is a fairly easy feature to understand, and it is also likely to appear on the 70-417 exam. Be sure you remember the No Auto Start priority especially, because that’s the only priority setting whose meaning isn’t made obvious by its name.


Image Exam Tip

You need to understand the difference between startup priority settings and preferred owner settings. Startup priority settings determine the order in which roles should be failed over and started after node failure. Preferred owner settings determine which node, if available, should handle the client requests for a role both before and after a node failure.


Virtual machine application monitoring

Windows Server 2012 and Windows Server 2012 R2 introduce the ability for a Hyper-V host to monitor the health of chosen services running on a clustered VM. If the Hyper-V host determines that a monitored service in a guest VM is in a critical state after normal attempts to restart the service within the guest OS have failed, the host is able to trigger a recovery. The Cluster service first attempts to recover the VM by restarting it gracefully. Then, if the monitored service is still in a critical state after the VM has restarted, the Cluster service fails the VM over to another node.

To monitor VM services with the VM Monitoring feature in Windows Server 2012 and Windows Server 2012 R2, the following requirements must be met:

Image Both the Hyper-V host and its guest VM must be running Windows Server 2012 or later.

Image The guest VM must belong to a domain that trusts the host’s domain.

Image The Failover Clustering feature must be installed on the Hyper-V host. The guest VM must also be configured as a role in a failover cluster on the Hyper-V host.

Image The administrator connecting to the guest through Failover Cluster Manager must be a member of the local administrators group on that guest.

Image All firewall rules in the Virtual Machine Monitoring group must be enabled on the guest, as shown in Figure 10-14.

Image

FIGURE 10-14 Enabling firewall rules for VM monitoring
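The same rules can be enabled from an elevated PowerShell prompt inside the guest; a minimal sketch:

```powershell
# Sketch: run inside the guest VM to enable all firewall rules in the
# Virtual Machine Monitoring group.
Enable-NetFirewallRule -DisplayGroup "Virtual Machine Monitoring"
```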

To configure VM monitoring, right-click the VM in Failover Cluster Manager, point to More Actions, and then select Configure Monitoring, as shown in Figure 10-15.

Image

FIGURE 10-15 Configuring the monitoring of a VM application

In the Select Services dialog box that opens, select the services that you want to monitor, as shown in Figure 10-16.
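Monitoring can also be configured from PowerShell on the Hyper-V host cluster with the cluster VM monitoring cmdlets. A sketch, assuming a clustered VM named VM1 and the Print Spooler service as the monitored item:

```powershell
# Sketch: configure monitoring of the Print Spooler service in the clustered
# VM "VM1" (hypothetical name). Run on a node of the host failover cluster.
Add-ClusterVMMonitoredItem -VirtualMachine "VM1" -Service "spooler"

# List the items currently being monitored in that VM:
Get-ClusterVMMonitoredItem -VirtualMachine "VM1"
```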

When you configure a service to be monitored as shown in Figure 10-16, the failure of the service is still primarily handled by the guest OS by default, not by the Cluster service on the Hyper-V host. The general response to a service failure is determined by the service properties accessible through the Services console within the guest. As you might remember, the Recovery tab in a service’s properties allows you to specify the local computer’s response if that service fails. You can specify a First Failure response, a Second Failure response, and the Subsequent Failures response. The default settings are shown in Figure 10-17.

Image

FIGURE 10-16 Selecting services to be monitored in a VM

Image

FIGURE 10-17 Default recovery properties for a service

As shown in Figure 10-17, the default recovery properties for a service are configured with the Restart The Service setting for First Failure and Second Failure. These attempts to restart the service are handled by the local operating system and not the Cluster service on the Hyper-V host.

The default setting for Subsequent Failures, also shown in Figure 10-17, is Take No Action. The Take No Action setting is where VM application monitoring comes in. If the service has been configured for monitoring, the Cluster service on the Hyper-V host steps in and takes over service recovery at whichever failure stage is configured with the Take No Action setting.
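Inside the guest, the per-service recovery actions shown on the Recovery tab can also be set from the command line with sc.exe. A hedged sketch, assuming the Print Spooler (spooler) service is the monitored service; note that sc.exe requires the space after each `=`:

```powershell
# Sketch: set First Failure and Second Failure to restart the service after
# 60 seconds, leaving Subsequent Failures at Take No Action. reset= is the
# period (in seconds) after which the failure count is cleared.
sc.exe failure spooler reset= 86400 actions= restart/60000/restart/60000
```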

In some circumstances, you might want to redirect the Cluster service recovery to a third-party application that allows you more control over the recovery process. In this case, you can disable the default behavior to restart and fail over the VM. You can achieve this first by opening the resource properties of the guest VM in Failover Cluster Manager, as shown in Figure 10-18.

Image

FIGURE 10-18 Modifying properties of a clustered VM


Image Exam Tip

Remember the significance of the Take No Action setting. For example, if you want a clustered VM to fail over to another node when a monitored service in that VM fails the first time (as opposed to the third time, which is the default), you need to configure the recovery settings for the service so that First Failure is set to Take No Action.


Then on the Settings tab of the Properties dialog box shown in Figure 10-19, clear the Enable Automatic Recovery For Application Health Monitoring check box. The Cluster service will still log an error when a monitored service is in a critical state, but it will no longer attempt to restart or fail over the VM.

Image

FIGURE 10-19 The setting to enable automatic recovery for a monitored VM application


Image Exam Tip

To migrate all clustered roles from a cluster running on Windows Server 2008 R2 to a new cluster running on Windows Server 2012 or Windows Server 2012 R2, right-click the cluster in Failover Cluster Manager on the source cluster and then select Migrate Roles. This step will start the Migrate A Cluster Wizard and guide you through the migration process.


Objective summary

Image A Scale-Out File Server (SoFS) is a new type of role for which you can configure high availability in a failover cluster. An SoFS can be used to ensure that an application that connects to a file share doesn’t generate errors during failover. In addition, an SoFS works on many live nodes at a time, so every additional node you add enables the cluster to handle more requests. An SoFS is not well-suited for use with file storage for users.

Image With startup priority, you can determine the order in which roles should be failed over from one node to the next in case of node failure.

Image Windows Server 2012 and Windows Server 2012 R2 let you monitor the health of an application running in a guest VM. When a monitored application reaches a critical state, the host computer can trigger the VM to restart. If the application remains in a critical state after the system restart, the host computer will trigger failover to another node.

Objective review

Answer the following questions to test your knowledge of the information in this objective. You can find the answers to these questions and explanations of why each answer choice is correct or incorrect in the “Answers” section at the end of the chapter.

1. You work as a network administrator for Adatum.com. The Adatum.com network includes 25 servers running Windows Server 2012 R2 and 400 clients running Windows 8.1.

You want to create a failover cluster to support a file share used by a resource-intensive application. Your priorities for the failover cluster are to prevent file handling errors in the event of failover and to maintain high performance as the usage of the application grows. Which role and storage type should you configure for the failover cluster? (Choose two. Each answer represents part of the solution.)

A. Configure as the role for the failover cluster a file server for general use.

B. Configure as the role for the failover cluster an SoFS.

C. Store the share on an NTFS volume provisioned from shared storage. Do not add the volume to CSVs.

D. Store the share on a CSV.

2. You work as a network administrator for Fourth Coffee, Inc. The Fourthcoffee.com network spans offices in five cities in North America. All servers in the network are running Windows Server 2012 R2 and all clients are running Windows 8.1.

You want to create a failover cluster to support a new file share that will be used by members of the marketing team in all branch offices. Your requirements for the failover cluster and the file share in general are to minimize downtime if a node fails, to minimize storage space needed for the share, to reduce or eliminate the possibility of file conflicts, and to minimize the amount of data transferred over wide area network (WAN) links.

How should you configure the failover cluster and file server? (Choose all that apply.)

A. Configure as the role for the failover cluster a file server for general use.

B. Configure as the role for the failover cluster an SoFS.

C. Enable Data Deduplication on the file share.

D. Enable BranchCache on the file share.

3. You want to create a two-node failover cluster to provide high availability for a virtual machine. The VM will host an important line-of-business (LOB) application used often by members of your organization throughout the day. You want to configure VM monitoring of the application so that the VM will restart if the application is found to be in a critical state and fail over to the other node if the application still is in a critical state after the system restart.

Which of the following is not a requirement of meeting this goal?

A. The host Hyper-V server needs to be running Windows Server 2012 or later.

B. The guest VM needs to be running Windows Server 2012 or later.

C. The host and the guest need to be members of the same domain.

D. The guest VM needs to have enabled the firewall rules in the Virtual Machine Monitoring group.

Objective 10.3: Manage virtual machine (VM) movement

Windows Server 2012 and Windows Server 2012 R2 both expand and improve on VM migration features that were introduced in Windows Server 2008 R2. The two biggest improvements are the addition of live migration in a nonclustered environment and storage migration.


This section covers the following topics:

Image Configuring and performing live migration in a failover cluster

Image Configuring and performing live migration outside of a failover cluster

Image Performing storage migration

Image VM network health protection in Windows Server 2012 R2


Live migration

Live migration is a feature that first appeared in Windows Server 2008 R2. Live migration lets you move a running VM from one Hyper-V host to another without any downtime. Originally, this feature was available only for VMs hosted in failover clusters, but in Windows Server 2012 and Windows Server 2012 R2, you can now perform live migration of a VM outside of a clustered environment. However, the process of performing live migration is different inside and outside of clusters, and each of these live migration types has slightly different requirements.

Live migration requires a few configuration steps that you need to understand for the exam. To start configuring this feature, open Hyper-V Settings for each Hyper-V host, as shown in Figure 10-20.

Image

FIGURE 10-20 Configuring Hyper-V settings

In the Hyper-V Settings dialog box that opens, click Live Migrations on the menu on the left. The associated live migration settings are shown in Figure 10-21.

Image

FIGURE 10-21 Live migration settings

Live migrations are not enabled by default. To enable this feature, perform the following four configuration steps in this dialog box:

1. Select the Enable Incoming And Outgoing Live Migrations check box for both the source and destination hosts.

2. For live migrations outside of clustered environments, you need to choose an authentication protocol on both host servers: either Credential Security Support Provider (CredSSP) or Kerberos.

Image CredSSP The advantage of this choice is that it requires no configuration. The limitation of choosing CredSSP as the authentication protocol, however, is that you need to be logged on to the source computer when you perform a live migration. You can’t perform live migrations through Hyper-V Manager on a remote computer.

Image Kerberos The advantage of choosing Kerberos as the authentication protocol is that you don’t need to be logged on to a source computer to perform the live migration. The disadvantage of using Kerberos as the authentication protocol for live migrations is that it requires configuration. Specifically, aside from selecting Kerberos in Hyper-V Settings, you need to adjust the properties of the source and destination computer accounts in Active Directory Users and Computers, on the Delegation tab. On each computer account, select the option to trust this computer for delegation to specified services only, and then add the following two services from the other computer: CIFS and Microsoft Virtual System Migration Service. The actual configuration required is shown in Figure 10-22. Note that this configuration step is also known as configuring constrained delegation. Expect to see a question about configuring constrained delegation on the 70-417 exam.

Image

FIGURE 10-22 Configuring constrained delegation

3. Set a value for the maximum number of simultaneous live migrations you want to allow on the network. This is a new feature of Windows Server 2012 and Windows Server 2012 R2. In Windows Server 2008 R2, you were limited to one live migration at a time. (In the real world, you can estimate 500 Mbps of network bandwidth required per individual live migration. In a Gigabit Ethernet network, you can safely leave the default value of 2.)

4. Add a list of subnets or individual IP addresses from which you want to allow live migrations. Live migration does not provide data encryption of VMs and storage as they are moved across the network, so security is an important consideration. Do not leave the default selection to use any available network for live migration unless you are in a testing environment.
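The same four steps can be performed with the Hyper-V PowerShell module on each host. A sketch under the assumption of a Kerberos-authenticated environment, with a hypothetical migration subnet:

```powershell
# Sketch: enable and configure live migration on a Hyper-V host.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos `
           -MaximumVirtualMachineMigrations 2

# Restrict live migration traffic to a specific subnet rather than
# any available network:
Set-VMHost -UseAnyNetworkForMigration $false
Add-VMMigrationNetwork 192.168.10.0/24
```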

Live migration in a failover cluster

Although CSVs are not required for VM storage when you perform live migration in a failover cluster, CSVs are nonetheless highly recommended. If the VM is not already stored in a CSV, you should move it there to prepare for clustered live migration.

Moving VM Storage to a CSV

To move VM storage to a CSV, right-click the VM in Failover Cluster Manager, point to Move, and then click Virtual Machine Storage, as shown in Figure 10-23.

Image

FIGURE 10-23 Moving virtual machine storage

Then, in the Move Virtual Machine Storage dialog box that opens, shown in Figure 10-24, select the VM in the top pane and then drag it to a CSV folder in the bottom left pane. Click Start to begin the copy operation.

Image

FIGURE 10-24 Moving VM storage to a CSV

Performing live migration

After the transfer is complete, you can perform a live migration as long as the Hyper-V environments are the same on the source and destination nodes, including the names of virtual switches in both locations. To perform the live migration, in Failover Cluster Manager, right-click the clustered VM, point to Move, point to Live Migration, and then click Select Node from the shortcut menu, as shown in Figure 10-25. (Note also the Best Possible Node option, which selects a destination node for you.)
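The equivalent operation in PowerShell uses the failover clustering cmdlets; a sketch with hypothetical role and node names:

```powershell
# Sketch: live-migrate the clustered VM role "VM1" to the node named "Node2".
Move-ClusterVirtualMachineRole -Name "VM1" -Node "Node2" -MigrationType Live
```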

Image

FIGURE 10-25 Performing a live migration in a failover cluster

In the Move Virtual Machine dialog box that opens, shown in Figure 10-26, select the destination node in the failover cluster to which you want to transfer the running VM and then click OK to start the process.

Image

FIGURE 10-26 Selecting a destination node for live migration

You can keep track of the migration status in Failover Cluster Manager, as shown in Figure 10-27.

Image

FIGURE 10-27 Viewing live migration status

During and after migration, the VM continues to operate without interruption.


Note

First introduced in Windows Server 2008, Quick Migration is still an available option when you choose to move a clustered VM in Windows Server 2012 or Windows Server 2012 R2 from one node to another. Quick Migration saves the VM state and resumes the machine on the destination node. The advantage of Quick Migration is that it is a faster process from start to finish for the VM you are migrating, and it requires less network bandwidth. The disadvantage of Quick Migration is that the VM is briefly brought offline during the migration process. If minimizing downtime is not a priority and you want to transfer a VM as quickly as possible, then Quick Migration is the best option.


Live migration outside of a clustered environment

Nonclustered live migration is a new feature in Windows Server 2012 and Windows Server 2012 R2 in which you can move a running VM from one Hyper-V host to another, with no downtime, outside of a clustered environment. The feature does require that the source and destination Hyper-V hosts belong to domains that trust each other. However, it doesn’t require SAN storage or a clustered environment. It’s also worth noting that a disadvantage of nonclustered live migration, compared to clustered live migration, is that the process takes much longer because all files are copied from the source to the destination host. (An exception to this rule is if the VM and its storage are kept on a file share and do not need to be copied from one host to the other during the migration process.)

Once you have configured live migration settings in Hyper-V Manager on the source and destination computers, you can perform the live migration. It’s a simple procedure. In Hyper-V Manager, right-click the running VM you want to live migrate and select Move from the shortcut menu, as shown in Figure 10-28.

Image

FIGURE 10-28 Initiating a live migration outside of a clustered environment

In the wizard that opens, select the Move The Virtual Machine option, as shown in Figure 10-29.

Image

FIGURE 10-29 Live migrating a VM in a nonclustered environment

Then, specify the destination server, as shown in Figure 10-30.

Image

FIGURE 10-30 Choosing a destination host for live migration in a nonclustered environment

You have three options for how you want to move the VM’s items when you perform the live migration, as shown in Figure 10-31. First, you can move all of the VM’s files and storage to a single folder on the destination computer. Next, you can choose to move different items to different folders in a particular way that you specify. Finally, you can migrate just the VM while leaving the storage in place. Note that this option requires the VM storage to reside on shared storage such as an iSCSI target, a fact that could easily serve as the basis for an incorrect answer choice on a test question.
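The wizard's first option corresponds to the Move-VM cmdlet with a single destination path. A sketch, with hypothetical VM, host, and path names:

```powershell
# Sketch: nonclustered live migration of "VM1" to "Host2", moving the VM and
# all of its storage to a single folder on the destination host.
Move-VM -Name "VM1" -DestinationHost "Host2" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\VM1"
```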

Image

FIGURE 10-31 Moving a VM to a single destination folder

Processor compatibility

One potential problem that can arise when you perform a live or quick migration is that the processor on the destination Hyper-V host supports different features than does the processor on the source host. In this case, you might receive the error message shown in Figure 10-32, and the migration fails. (A failed migration does not negatively affect the running VM. The result of the failure is simply that the VM is left running on the source computer.)

Image

FIGURE 10-32 An error indicating processor feature incompatibility

Neither live migration nor quick migration is supported between hosts with processors from different manufacturers. However, if the processors on the source and destination computers are from the same manufacturer and are found to support incompatible features, you have the option of limiting the virtual processor features on the VM as a way to maximize compatibility and improve the chances that the migration will succeed. (Again: This setting does not provide compatibility between different processor manufacturers.)

To enable processor compatibility, expand the Processor settings in the VM’s settings dialog box and select the Compatibility node, as shown in Figure 10-33. Then, select the Migrate To A Physical Computer With A Different Processor Version check box. Alternatively, you can run the following command at a Windows PowerShell prompt:

Set-VMProcessor VMname -CompatibilityForMigrationEnabled $true

If you see a question about live migration in which you receive an error indicating trouble related to processor-specific features, you now know how to handle it: Just enable the processor compatibility setting.


Image Exam Tip

You should expect to see questions about VM migration in which the fact that the source and destination hosts have different processor manufacturers is located somewhere in a table or list. When the processor manufacturers are different, your best option for migration is to manually export the VM from the source machine and import it on the destination machine. You can’t use live migration or quick migration. In Windows PowerShell, you perform these manual export and import operations with the Export-VM and Import-VM cmdlets.


Image

FIGURE 10-33 Enabling processor compatibility for migration

Virtual switch name matching

Another common problem that can occur when you attempt to perform a live migration is that the destination Hyper-V environment doesn’t provide virtual switches with names that exactly match those in the Hyper-V environment on the source computer. This problem is detected as you complete the Move Wizard. For each snapshot of the source VM that defines a virtual switch without an exact equivalent on the destination Hyper-V host, you are given an opportunity to choose another virtual switch on the destination that the VM should use in place of the original. This step is shown in Figure 10-34.

Image

FIGURE 10-34 Matching a virtual switch on a destination host for live migration

After you make the required substitutions, the wizard begins the live migration when you click Finish on the final page, as shown in Figure 10-35.

Image

FIGURE 10-35 A live migration in progress in a nonclustered environment


More Info

For more information about live migration in Windows Server 2012 and Windows Server 2012 R2, visit http://technet.microsoft.com/en-us/library/jj134199.aspx.


Storage migration

Another useful new feature in Windows Server 2012 and Windows Server 2012 R2 is the live migration of VM storage. With this option, you can move the data associated with a VM from one volume to another while the VM remains running. This option is useful if storage space is scarce on one volume or storage array and is more plentiful on another source of storage. An important advantage of storage-only live migration to remember for the exam is that unlike live migration, it can be performed in a workgroup environment because the source and destination servers are the same.

To perform storage migration, use the Move option to open the Move Wizard, as you would do to begin the process of live-migrating the VM. Then, on the Choose Move Type page of the wizard, select Move The Virtual Machine’s Storage, as shown in Figure 10-36.

Image

FIGURE 10-36 Choosing a migration type

You have three options for how to migrate the storage of the VM, as shown in Figure 10-37. The first option is to move all storage to a single folder on the destination volume.

Image

FIGURE 10-37 Moving storage to a single folder

The second option allows you to select which particular storage items you want to migrate, such as snapshot data or the smart paging folder, as shown in Figure 10-38. The third and final option allows you to migrate only the particular VHDs you specify.

For the exam, what’s most important to remember about storage migration is that this feature provides an option that is often the best way to solve a problem. If space runs out for a running VM, it’s not necessarily a good idea to migrate that VM to another server. No other server might be available, for example, and you might want to spare your organization the unnecessary expense of buying a new one. In this case, it’s often more prudent simply to attach a new disk array to the server and move the VM storage to this newly available space.
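The scenario just described maps to the Move-VMStorage cmdlet; a sketch with hypothetical names:

```powershell
# Sketch: move all storage for the running VM "VM1" to a newly attached
# volume, with no downtime for the VM.
Move-VMStorage -VMName "VM1" -DestinationStoragePath "E:\VMStorage\VM1"
```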


More Info

For a good review of Live Migration and Storage Migration, see the TechEd Australia session on the topic at: http://channel9.msdn.com/Events/TechEd/Australia/2012/VIR314.


Image

FIGURE 10-38 Selecting VM items to migrate to a new volume

VM network health protection in Windows Server 2012 R2

In Windows Server 2012 R2, there’s a new option named Protected Network in the advanced Network Adapter settings of a VM in Hyper-V Manager, as shown in Figure 10-39. You use this setting to protect a highly available VM from the failure of the associated network connection.

Image

FIGURE 10-39 Configuring a highly available VM for network protection

Before Windows Server 2012 R2, if a clustered VM were to lose network connectivity in a way that didn’t affect its heartbeat connection, the VM would remain on the same physical node and not trigger a failover. Such a network disruption could prevent clients from connecting to the VM.


Note

Failover cluster nodes exchange heartbeats once per second by default. The number of heartbeats that can be missed before a failover is triggered is called the heartbeat threshold.


When you select the Protected Network option on the settings of a virtual network adapter, the physical node monitors that network for disruptions. If that network connection is broken, the VM is automatically live-migrated to another available node.

Objective summary

Image Live migration is a feature in which a running VM is transferred from one host computer to another without any downtime. In Windows Server 2012 and Windows Server 2012 R2, live migration can be performed inside or outside a failover cluster, but live migration within a failover cluster is much faster.

Image When configuring servers for live migration outside of a failover cluster, you must choose an authentication protocol, CredSSP or Kerberos. CredSSP needs no configuration, but it requires you to trigger the live migration while logged in to the source host. Kerberos allows you to trigger the live migration from a remote host, but it requires you to configure constrained delegation for the source and destination hosts.

Image Windows Server 2012 and Windows Server 2012 R2 introduce storage migration for VMs. With storage migration, you can move all of the storage associated with a running VM from one disk to another without any downtime.

Image In Windows Server 2012 R2, you can enable network protection on a network adapter in a VM’s settings. If the VM is clustered and a disconnection is then detected on the selected network, a live migration of the VM is triggered to another cluster node.

Objective review

Answer the following questions to test your knowledge of the information in this objective. You can find the answers to these questions and explanations of why each answer choice is correct or incorrect in the “Answers” section at the end of the chapter.

1. You are a network administrator for Contoso.com. You have recently upgraded all of your servers to Windows Server 2012 R2. Your manager has indicated that he wants to start testing the live migration feature in a nonclustered environment so that you can eventually take advantage of this functionality in production.

You create a small test network consisting of two Hyper-V servers running Windows Server 2012 named Host1 and Host2. The hardware and software settings on these two physical servers exactly match those of two physical servers in your production network. Host1 is currently hosting a guest VM named VM1.

You enable live migration on both servers and configure CredSSP as the authentication protocol. You then log on locally to Host1 and initiate a live migration of VM1 from Host1 to Host2. You receive an error message indicating that the VM is using processor-specific features not supported on the destination physical computer.

You want to perform a live migration successfully in your test network so that you will know what is required to use this feature successfully in production. What should you do?

A. Configure constrained delegation for Host1 and Host2.

B. Disable VM monitoring on VM1.

C. Configure Kerberos as the authentication protocol on Host1 and Host2.

D. On Host1, run the following command:

Set-VMProcessor VM1 -CompatibilityForMigrationEnabled $true

2. You are a network administrator for Adatum.com. You have recently upgraded all of your servers to Windows Server 2012 R2. Your manager has indicated that she wants to start testing the live migration feature so that you can eventually take advantage of this functionality in production.

You create a small test network consisting of two Hyper-V servers running Windows Server 2012 R2 named VHost1 and VHost2. The hardware and software settings on these two physical servers exactly match those of two physical servers in your production network. VHost2 is currently hosting a guest VM named VM2.

You enable live migration on both servers and configure Kerberos as the authentication protocol. You then log on locally to VHost1 and initiate a live migration of VM2 from VHost2 to VHost1. The live migration fails and you receive an error indicating “No credentials are available in the security package.”

You want to perform a live migration successfully in your test network so that you will know what is required to use this feature successfully in production. You also want to initiate live migrations when you are not logged on to the source host server. What should you do next?

A. Configure constrained delegation for VHost1 and VHost2.

B. Disable VM monitoring on VM2.

C. Configure CredSSP as the authentication protocol on VHost1 and VHost2.

D. On VHost1, run the following command:

Set-VMProcessor VM2 -CompatibilityForMigrationEnabled $true

3. You are a network administrator for Proseware.com. One of your servers is named HV1 and is running Windows Server 2012 with the Hyper-V role. HV1 is hosting 10 virtual machines on locally attached storage. It is not a member of any domain.

The available storage used by the 10 guest VMs on HV1 is close to being depleted. At the current rate of growth, the current physical disks attached to HV1 will run out of space in three months.

You want to provide more space to your guest VMs. How can you solve the storage problem with a minimum financial expense and minimum impact on users?

A. Perform a quick migration of the VMs on HV1 to a new server with more space.

B. Perform a live migration of the VMs on HV1 to a new server with more space.

C. Perform a storage migration of the VMs on HV1 to a new storage array with ample storage space.

D. Attach a new storage array with ample storage space to HV1 and expand the VHD files used by the guest VMs.


Image Thought experiment: Configuring and managing high availability at Proseware

You are a network administrator for Proseware.com, a software company with offices in several cities. You are designing high availability for certain applications and services at the Philadelphia branch office. You have the following goals:

Image You want to ensure that two domain controllers from the Proseware.com domain remain online with high availability in the Philadelphia branch office, even if one server experiences a catastrophic failure or is brought down for maintenance. (The domain controllers will not host any operations master roles.)

Image You want to ensure that a heavily used LOB application can withstand the failure of one server without experiencing any downtime or file handling errors, even during failover. The LOB application is not cluster-aware. It also frequently reads and writes data stored on a network share.

With these details in mind, answer the following questions. You can find the answers to these questions in the “Answers” section.

1. How many physical servers will you need to support your requirements, at a minimum?

2. How can you best provide high availability for file sharing?

3. How can you best provide high availability for the LOB application?

4. Which of your goals require Windows Server 2012 or Windows Server 2012 R2, as opposed to an earlier version of Windows Server?


Answers

This section contains the answers to the Objective Reviews and the Thought Experiment.

Objective 10.1: Review

1. Correct answer: C

A. Incorrect: A cluster storage pool can only be created from SAS disks.

B. Incorrect: A cluster storage pool can only be created from SAS disks. In addition, a cluster storage pool is incompatible with external RAIDs.

C. Correct: To create a cluster storage pool, you need three independent SAS disks that are not configured with any RAID or governed by any disk subsystem.

D. Incorrect: You cannot create a cluster storage pool from disks that are configured as part of a RAID or governed by any disk subsystem.

2. Correct answer: B

A. Incorrect: Creating a storage pool by itself might simplify management of SAN storage, but it won’t minimize downtime in case of node failure. In addition, the SAN storage cannot be configured as a storage pool for the cluster because it is iSCSI based. Only SAS storage can be used for a cluster storage pool.

B. Correct: Keeping VM storage on a CSV will optimize live migration of the VM in case of node failure and minimize downtime. CSVs will also simplify management of SAN storage by allowing multiple failover cluster nodes to share LUNs.

C. Incorrect: If you provision volumes from a SAS array, you will later be able to create a storage pool for the cluster, which might simplify management of storage. However, using a SAS array will not minimize downtime in case of node failure.

D. Incorrect: Assigning a mirrored volume to the cluster might prevent node failure if one disk fails, but it will not minimize downtime if a node does fail. In addition, it will not simplify management of SAN storage.

3. Correct answer: C

A. Incorrect: This solution performs updates only once, and only on Node1, not on the entire cluster.

B. Incorrect: This solution updates the cluster only once. It doesn’t minimize administrative effort because you would need to do it repeatedly.

C. Correct: This solution configures Cluster1 to perform Windows Updates automatically and regularly on both nodes in the cluster.

D. Incorrect: This solution performs updates only on Node1, not the entire cluster.

Objective 10.2: Review

1. Correct answers: B, D

A. Incorrect: A traditional file server for general use is best suited for users, not resource-intensive applications. In addition, a traditional file server would not easily allow you to handle an increased load as the usage of the file share increased.

B. Correct: A Scale-Out File Server allows an application to maintain file handles even during failover, which minimizes application errors. In addition, an SoFS allows you to keep all nodes active and to add additional nodes as needed to handle an increased load.

C. Incorrect: A Scale-Out File Server requires CSV storage. Choosing this storage type would not allow you to meet your requirements of reducing errors and maintaining high performance.

D. Correct: A Scale-Out File Server requires CSV storage.

2. Correct answers: A, C, D

A. Correct: A File Server for general use is the more suitable role to provide high availability for a file share that users (as opposed to applications) will use for file storage. In addition, only the File Server role is compatible with Data Deduplication and BranchCache.

B. Incorrect: An SoFS is not compatible with Data Deduplication or BranchCache, two features that will help you meet your requirements for the share.

C. Correct: Data Deduplication will help minimize storage space requirements.

D. Correct: BranchCache will minimize the amount of data transferred over WAN links and prevent file conflicts.

3. Correct answer: C

A. Incorrect: VM monitoring does indeed require Windows Server 2012 or later to be running on the host Hyper-V server and failover cluster node.

B. Incorrect: VM monitoring requires Windows Server 2012 or later to be running on the clustered VM.

C. Correct: The host and guest do not need to be members of the same domain. However, the two domains need to trust each other.

D. Incorrect: The firewall rules in the Virtual Machine Monitoring group do need to be enabled on the clustered VM.

Objective 10.3: Review

1. Correct answer: D

A. Incorrect: Constrained delegation is required for Kerberos authentication. You have configured CredSSP as the authentication protocol. In addition, you have received an error related to processor compatibility, not authentication.

B. Incorrect: VM monitoring isn’t incompatible with live migration, so it wouldn’t generate an error such as this one.

C. Incorrect: There is no reason to change the authentication protocol to Kerberos under these circumstances. CredSSP allows you to initiate a live migration when you are logged on locally to the source host.

D. Correct: If you enable processor compatibility on the VM, the virtual processor will use only the features of the processor that are available on all versions of a virtualization-capable processor from the same processor manufacturer. You would see the error described if each host server used a different processor from the same manufacturer.

2. Correct answer: A

A. Correct: When you choose Kerberos as the authentication protocol, you need to configure constrained delegation on the computer accounts for the source and destination computers.

B. Incorrect: VM monitoring is not incompatible with live migration and would not generate an error such as the one described.

C. Incorrect: CredSSP as an authentication protocol would not enable you to initiate live migrations when you are not logged on to the source host server.

D. Incorrect: The error received was not related to processor compatibility, so this step would not fix the problem.
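As a practical note, the constrained delegation that answer A calls for can be configured with the Active Directory module for Windows PowerShell. The sketch below uses the host and domain names from the question’s scenario, but the fully qualified names are assumed; adjust them to your environment. Run it once for each direction in which you want to initiate migrations:

```powershell
# Allow VHost1 to present delegated credentials to VHost2 for the services
# live migration uses (the VM migration service and, for moving storage, CIFS).
$source = Get-ADComputer -Identity VHost1
Set-ADObject -Identity $source -Add @{
    'msDS-AllowedToDelegateTo' = @(
        'Microsoft Virtual System Migration Service/VHost2.adatum.com',
        'Microsoft Virtual System Migration Service/VHost2',
        'cifs/VHost2.adatum.com',
        'cifs/VHost2'
    )
}
# Repeat with the roles reversed so migrations can also be initiated
# from VHost2 back to VHost1.
```

This is the scripted equivalent of selecting “Trust this computer for delegation to specified services only” on the computer account’s Delegation tab in Active Directory Users And Computers.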

3. Correct answer: C

A. Incorrect: A quick migration is possible only in a failover cluster environment. In addition, purchasing a new server with ample new storage is unnecessarily costly compared to purchasing only new storage.

B. Incorrect: You cannot perform a live migration from a computer outside of a domain environment. In addition, purchasing a new server with ample new storage is unnecessarily costly compared to purchasing only new storage.

C. Correct: This option avoids the unnecessary expense of purchasing a new server and lets you transfer storage to the new storage array live, without taking your VMs offline.

D. Incorrect: This option will not solve your problem. If you purchase a new disk array, you need to find a way to move the VMs onto the new storage. You will be able to expand the size of the VHD files only to the point that they will use up the space on the old disks.
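Once the new array is attached to HV1, the storage migration described in answer C comes down to a single cmdlet per VM. This is a sketch; the VM name and destination path are hypothetical examples, not values from the question:

```powershell
# Move all of a VM's storage (VHDs, checkpoints, smart paging file, and
# configuration) to the newly attached array while the VM keeps running.
Move-VMStorage -VMName VM1 -DestinationStoragePath 'E:\VMs\VM1'
```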

Thought experiment

1. Three. You need to have two highly available domain controllers even after one server is brought down. You can provide high availability for all workloads on those three servers.

2. Use an SoFS with CSVs so that the LOB application can remain connected to files even during failover.

3. You should host the LOB application in a highly available VM because the application itself isn’t cluster-aware.

4. A virtualized domain controller is not recommended in older versions of Windows Server. In addition, an SoFS is available only in Windows Server 2012 and later.