The Bottom Line - Mastering Hyper-V 2012 R2 with System Center and Windows Azure (2014)

Appendix A. The Bottom Line

Each of The Bottom Line sections in the chapters suggests exercises to deepen skills and understanding. Sometimes there is only one possible solution, but often you are encouraged to use your skills and creativity to create something that builds on what you know and lets you explore one of many possibilities.

Chapter 1: Introduction to Virtualization and Microsoft Solutions

1. Articulate the key value propositions of virtualization. Virtualization solves the numerous pain points and limitations of physical server deployments today. Primary benefits of virtualization include consolidation of resources, which increases resource utilization and provides OS abstraction from hardware, allowing OS mobility; financial savings through less server hardware, less datacenter space, and simpler licensing; faster provisioning of environments; and additional backup and recovery options.

1. Master It How does virtualization help in service isolation in branch office situations?

2. Solution Virtualization enables the different roles required (such as domain controllers and file services) to run on different operating system instances, ensuring isolation without requiring large amounts of hardware.

2. Understand the differences in functionality between the different versions of Hyper-V. Windows Server 2008 introduced the foundational Hyper-V capabilities, and the major new features in 2008 R2 were Live Migration and Cluster Shared Volumes (CSV). Windows 2008 R2 SP1 introduced Dynamic Memory and RemoteFX. Windows Server 2012 introduced new levels of scalability and mobility with features such as Shared Nothing Live Migration, Storage Live Migration, and Hyper-V Replica in addition to new networking and storage capabilities. Windows 2012 R2 Hyper-V enhances many of the 2012 features with generation 2 virtual machines, Live Migration compression and SMB support, new Hyper-V Replica replication granularity, and Hyper-V Replica Extended replication.

1. Master It What is the largest virtual machine that can be created on Windows Server 2012 Hyper-V?

2. Solution The largest virtual machine can have 64 vCPUs with 1 TB of memory.
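Where the Hyper-V PowerShell module is available, these limits map to straightforward cmdlets. A minimal sketch (the VM name BigVM is hypothetical, and the host must itself have at least 64 logical processors and 1 TB of RAM for the configuration to be honored):

```powershell
# Sketch: scale a VM to the Windows Server 2012 maximums.
# "BigVM" is a hypothetical VM name; vCPU count and static memory
# can be changed only while the VM is off.
Stop-VM -Name BigVM
Set-VMProcessor -VMName BigVM -Count 64
Set-VMMemory -VMName BigVM -StartupBytes 1TB
Start-VM -Name BigVM
```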

3. Master It What features were enabled for Linux virtual machines in Windows Server 2012 R2 Hyper-V?

4. Solution Two key features enabled for Linux in Windows Server 2012 R2 Hyper-V were Dynamic Memory and file-consistent backup.

3. Differentiate between the types of cloud service and when each type is best utilized. There are three primary types of cloud services: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). SaaS provides a complete software solution that is entirely managed by the providing vendor, such as a hosted mail solution. PaaS provides a platform on which custom-written applications can run and should be used for new custom applications when possible because it minimizes maintenance by the client. IaaS allows virtual machines to be run on a provided service, but the entire OS and application must be managed by the client. IaaS is suitable where PaaS or SaaS cannot be used and in development/test environments.

Chapter 2: Virtual Machine Resource Fundamentals

1. Describe how the resources of a virtual machine are virtualized by the hypervisor. The hypervisor directly manages the processor and memory resources with Hyper-V. Logical processors are scheduled to satisfy the compute requirements of virtual processors assigned to virtual machines, and multiple virtual processors can share the same logical processor. Virtual machines are assigned memory by the hypervisor from the memory available in the physical host, and Dynamic Memory allows memory to be added to and removed from a virtual machine based on resource need. Other types of resources, such as network and storage, are provided by the management partition through a kernel-mode memory bus known as the VMBus. This allows existing Windows drivers to be used for the wide array of storage and network devices typically used.

1. Master It How is Dynamic Memory different from Memory Overcommit?

2. Solution Dynamic Memory allocates memory in an intelligent fashion to virtual machines based on how it is being used by processes running inside the virtual machine. Memory Overcommit technologies work by telling a virtual machine that it has a large amount of memory and only allocating the memory as the virtual machine writes to it. However, this approach does not work well with modern operating systems that try to use all available memory, even if only for cache purposes.
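Dynamic Memory is configured per virtual machine; a minimal sketch using the Hyper-V PowerShell module (the VM name App01 and the byte values are hypothetical):

```powershell
# Sketch: enable Dynamic Memory on a hypothetical VM named "App01".
# The VM boots with StartupBytes; memory is then added and removed
# between MinimumBytes and MaximumBytes based on in-guest demand.
# Buffer is the percentage of extra memory kept available as a cushion.
Set-VMMemory -VMName App01 -DynamicMemoryEnabled $true `
    -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 8GB -Buffer 20
```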

2. Correctly use processor and memory advanced configuration options. The compatibility configuration of a virtual machine processor should be used when a virtual machine may be moved between hosts with different versions of the same processor family. The processor compatibility option hides higher-level features from the guest operating system, enabling migrations without downtime to the virtual machine. Processor reserve and limit options ensure that a virtual machine coexists with other virtual machines without getting too many or too few resources. Dynamic Memory configurations allow the startup, minimum, and maximum amounts of memory for a virtual machine to be configured. It's important to note that the maximum amount of memory configured is available only if sufficient memory exists within the host.
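These processor options can be applied in one cmdlet call; a sketch (the VM name App01 and the percentage values are hypothetical):

```powershell
# Sketch: advanced processor settings for a hypothetical VM "App01".
# CompatibilityForMigrationEnabled hides newer CPU features so the VM
# can move to hosts with older processors of the same family.
# Reserve and Maximum are percentages of the assigned virtual
# processors; RelativeWeight defaults to 100.
Set-VMProcessor -VMName App01 -CompatibilityForMigrationEnabled $true `
    -Reserve 10 -Maximum 75 -RelativeWeight 150
```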

1. Master It When should the NUMA properties of a virtual machine be modified?

2. Solution Hyper-V will configure the optimal settings for virtual machines based on the physical NUMA configuration of the hosts. However, if a virtual machine will be moved between hosts with different NUMA configurations, then the NUMA configuration of the virtual machine should be changed to match the smallest NUMA configuration of all the hosts it may be moved between.
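One way to apply this guidance, sketched with the Hyper-V module (the VM name App01 and the per-node limit of 8 are hypothetical values that would be taken from the smallest host's topology):

```powershell
# Sketch: inspect the host NUMA topology, then constrain a VM's
# virtual NUMA so it matches the smallest host it might migrate to.
Get-VMHostNumaNode    # shows processors and memory per physical node
Set-VMProcessor -VMName App01 -MaximumCountPerNumaNode 8
```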

3. Explain the difference between VHD/VHDX and pass-through storage. VHD and VHDX files are virtual hard disks that are files on a file system or share accessible to the Hyper-V host. They provide abstraction of the storage seen by the virtual machine and the underlying physical storage. Pass-through storage directly maps a virtual machine to a physical disk accessible from the host, which limits Hyper-V functionality and breaks one of the key principles of virtualization: the abstraction of the virtual machine from the physical fabric.

1. Master It Why would VHD still be used with Windows Server 2012 Hyper-V?

2. Solution VHDX is superior to VHD in every way. However, if you need backward compatibility with Windows Server 2008 R2 Hyper-V or Windows Azure IaaS (at the time of this writing), then VHD should still be used.
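Converting between the two formats is a single cmdlet in either direction; a sketch (paths are hypothetical, and the VM using the disk must be stopped first):

```powershell
# Sketch: convert a legacy VHD to VHDX once backward compatibility
# is no longer needed (hypothetical paths; disk must not be in use).
Convert-VHD -Path D:\VMs\app01.vhd -DestinationPath D:\VMs\app01.vhdx

# And back again if the disk must run on 2008 R2 Hyper-V or
# (at the time of writing) Windows Azure IaaS.
Convert-VHD -Path D:\VMs\app01.vhdx -DestinationPath D:\VMs\app01.vhd
```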

Chapter 3: Virtual Networking

1. Architect the right network design for your Hyper-V hosts and virtual machines using the options available. There are many different networking traffic types related to a Hyper-V host, including management, virtual machine, cluster, live migration, and storage. While traditionally a separate network adapter was used for each type of traffic, a preferred approach is to create multiple vNICs in the management partition that connect to a shared virtual switch. This minimizes the number of physical NICs required while providing resiliency from a NIC failure for all workloads connected to the switch.

1. Master It Why are separate network adapters required if SMB is leveraged and the network adapters support RDMA?

2. Solution RDMA is not compatible with NIC teaming, which would have been used as the foundation for the connectivity of the virtual switch. Therefore, if RDMA needs to be leveraged for the best performance, it would be necessary to have additional network adapters that are not part of a NIC team for the RDMA optimized workloads such as Live Migration, Cluster, and SMB file traffic.
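The converged design described above — a NIC team, a shared virtual switch, and per-traffic-type vNICs in the management partition — can be sketched as follows (team, switch, vNIC names, adapter names, and the VLAN ID are all hypothetical):

```powershell
# Sketch: converged networking. One team of two physical NICs, one
# external switch on the team, and management-partition vNICs for
# each traffic type. All names and the VLAN ID are hypothetical.
New-NetLbfoTeam -Name HostTeam -TeamMembers NIC1, NIC2
New-VMSwitch -Name ConvergedSwitch -NetAdapterName HostTeam -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -Name Management    -SwitchName ConvergedSwitch
Add-VMNetworkAdapter -ManagementOS -Name LiveMigration -SwitchName ConvergedSwitch
Add-VMNetworkAdapter -ManagementOS -Name Cluster       -SwitchName ConvergedSwitch

# Optionally tag a vNIC onto its own VLAN.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName LiveMigration -Access -VlanId 20
```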

2. Identify when to use the types of NVGRE Gateway. There are three separate scenarios supported by NVGRE Gateway: S2S VPN, NAT, and Forwarder. S2S VPN should be used when a virtual network needs to communicate with another network, such as a remote network. Forwarder is used when the IP scheme used in the virtual network is routable on the physical fabric, for example, when the physical fabric network is expanded into the virtual network. NAT is required when the IP scheme in the virtual network is not routable on the physical network fabric and external connectivity is needed, such as when tenants need to access the Internet.

3. Leverage SCVMM 2012 R2 for many networking tasks. While Hyper-V Manager enables many networking functions to be performed, each configuration is limited to a single host and is hard to manage at scale. SCVMM is focused on enabling the network to be modeled at a physical level, and then the types of network required by virtual environments can be separately modeled with different classifications of connectivity defined. While the initial work may seem daunting, the long-term management and flexibility of a centralized networking environment is a huge benefit.

1. Master It Why is SCVMM required for network virtualization?

2. Solution There are three planes to network virtualization: data, control, and management. SCVMM is the management plane and also part of the control plane. For proper routing between virtual machines in a virtual network, SCVMM is vital for populating the policies to each participating Hyper-V host.

Chapter 4: Storage Configurations

1. Explain the types of storage available to a virtual machine. Windows Server 2012 R2 provides a number of different types of storage to a virtual machine. VHDX files provide a completely abstracted and self-contained virtual container for file systems available to virtual machines, and 2012 R2 allows a VHDX file connected to the SCSI bus to be shared between multiple virtual machines, providing shared storage. Additionally, SAN-hosted storage can be exposed to virtual machines through the use of iSCSI running inside the guest operating system or through the new Virtual Fibre Channel capability.

1. Master It Why is MPIO required?

2. Solution Where multiple paths are available to storage for resiliency purposes, the storage will be seen multiple times by the operating system. MPIO makes the operating system aware of the multiple paths to the storage and consolidates the storage view to one object for each storage instance.

2. Identify when to use Virtual Fibre Channel and when to use shared VHDX and the benefits of each. Virtual Fibre Channel allows virtual machines to be directly connected to a fibre channel SAN without the host requiring zoning to the storage, but it requires knowledge of the storage fabric. Shared VHDX provides shared storage to the virtual machine without requiring that the users of the shared VHDX have knowledge of the storage fabric, which is useful in hosting scenarios, where all aspects of the physical fabric should be hidden from the users.
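Attaching a shared VHDX to the nodes of a guest cluster is a per-VM operation; a sketch (VM names and the CSV path are hypothetical):

```powershell
# Sketch: attach one VHDX to two guest-cluster nodes as shared
# storage. -SupportPersistentReservations marks the disk as shared
# (new in 2012 R2); the file must reside on a CSV or an SMB 3.0
# share and must be attached to the SCSI controller.
Add-VMHardDiskDrive -VMName Node1 -ControllerType SCSI `
    -Path C:\ClusterStorage\Volume1\shared.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -VMName Node2 -ControllerType SCSI `
    -Path C:\ClusterStorage\Volume1\shared.vhdx -SupportPersistentReservations
```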

3. Articulate how SMB 3.0 can be used. SMB 3.0 went through a huge upgrade in Windows Server 2012, providing an enterprise-level file-based protocol that can now be used to store Hyper-V virtual machines. This includes additional storage options for Hyper-V environments, including fronting existing SANs with a Windows Server 2012 R2 scale-out file server cluster to extend the SAN's accessibility beyond hosts with direct SAN connectivity.

1. Master It Which two SMB technologies enable virtual machines to move between nodes in a SoFS without any interruption to processing?

2. Solution SMB Transparent Failover and SMB Scale-Out enable the movement of SMB clients between servers without the need for LUNs to be moved and with no loss of handles and locks.
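Storing a virtual machine on SMB 3.0 is simply a matter of pointing the VM's paths at a UNC share; a sketch (the share path, VM name, and disk size are hypothetical):

```powershell
# Sketch: create a VM whose configuration and virtual disk live on
# an SMB 3.0 share presented by a scale-out file server.
# The UNC path \\sofs\VMs is hypothetical.
New-VM -Name App01 -Generation 2 -MemoryStartupBytes 1GB `
    -Path \\sofs\VMs `
    -NewVHDPath \\sofs\VMs\App01\app01.vhdx -NewVHDSizeBytes 60GB
```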

Chapter 5: Managing Hyper-V

1. Identify the different ways to deploy Hyper-V. Windows Server 2012 R2 Hyper-V can be deployed using a number of methods. The traditional approach is to install a server from setup media, which could be a DVD, USB device, or even files obtained over the network. Enterprise systems management solutions such as System Center Configuration Manager and Windows Deployment Services can be used to customize deployments. System Center Virtual Machine Manager can also be used to deploy Hyper-V hosts using Boot to VHD technology, providing a single management solution for deployment of hosts and virtual machines.

1. Master It What other types of server can SCVMM 2012 R2 deploy?

2. Solution In addition to deploying Hyper-V hosts, SCVMM 2012 R2 can deploy scale-out file servers to act as storage hosts for Hyper-V virtual machines.

2. Explain why using Server Core is beneficial to deployments. Windows Server and Windows client operating systems share a lot of common code, and a typical Windows Server deployment has a graphical interface, Internet browser, and many graphical tools. These components all take up space, require patching, and may have vulnerabilities. For many types of server roles, these graphical elements are not required. Server Core provides a minimal server footprint that is managed remotely, which means less patching and therefore fewer reboots in addition to a smaller attack surface. Because a host reboot requires all virtual machines to also be rebooted, using Server Core is a big benefit for Hyper-V environments to remove as many reboots as possible.

1. Master It What was the big change to Server Core between Windows Server 2008 R2 and Windows Server 2012?

2. Solution In Windows Server 2008 R2, the choice to use Server Core or Full Install had to be made at installation time and could not be changed. Windows Server 2012 introduced configuration levels, which allow the graphical shell and the management tools to be added and removed independently at any time, requiring only a reboot to change configuration level.
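Moving between configuration levels is a feature install or uninstall followed by a reboot; a sketch:

```powershell
# Sketch: move a full installation down to Server Core, and back.
# Each change takes effect after the reboot triggered by -Restart.
Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart   # to Server Core
Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart     # back to full
```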

3. Explain how to create and use virtual machine templates. While it is possible to manually create the virtual machine environment and install the operating system for each new virtual machine, it's inefficient considering that a virtual machine's storage is simply a file that can be duplicated. A far more efficient and expedient approach is to create a generalized operating system template VHDX file, which can then be deployed to new virtual machines very quickly. A virtual machine template allows the virtual hardware configuration of a virtual machine to be configured, including OS properties such as domain join instructions, local administrator password, and more. The configuration is then linked to a template VHDX file. When the template is deployed, minimal interaction is required by the requesting user, typically just an optional name, and within minutes, the new virtual environment with a configured guest operating system is available.

Chapter 6: Maintaining a Hyper-V Environment

1. Explain how backup works in a Hyper-V environment. Windows features the VSS component that enables application-consistent backups to be taken of an operating system by calling VSS writers created by application vendors. When a backup is taken of a virtual machine at the Hyper-V host level, the VSS request is passed to the guest operating system via the backup guest service, which allows the guest OS to ensure that the disk is in a backup-ready state, allowing the virtual hard disk to be backed up at the host and be application consistent.

1. Master It Is shared VHDX backed up when you perform a VM backup at the host level?

2. Solution No. Shared VHDX, iSCSI, and fibre channel–connected storage are not backed up when performing a VM backup at the host level. To back up these types of storage, a backup within the virtual machine must be performed.

2. Understand how to best use checkpoints and where not to use them. Checkpoints, previously known as snapshots, allow a point-in-time view of a virtual machine to be captured and then applied at a later time to revert the virtual machine back to the state it was in at the time the snapshot was taken. This is useful in testing scenarios but should not be used in production because the effect of moving a virtual machine back in time can cause problems for many services. It can even cause domain membership problems if the computer's AD account password changes after the checkpoint creation.
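For the test scenarios where checkpoints are appropriate, the lifecycle is create, optionally revert, and then merge away; a sketch (the VM name Test01 and the checkpoint name are hypothetical):

```powershell
# Sketch: checkpoint a test VM before a risky change, revert if
# needed, and merge the checkpoint away when finished.
# "Test01" and the checkpoint name are hypothetical.
Checkpoint-VM -Name Test01 -SnapshotName "Before update"

# ...test the change; to roll back to the captured state:
Restore-VMSnapshot -VMName Test01 -Name "Before update" -Confirm:$false

# Delete the checkpoint once testing is complete so the AVHDX merges.
Remove-VMSnapshot -VMName Test01 -Name "Before update"
```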

3. Understand the benefits of service templates. Typically a virtual machine is created from a virtual machine template, which allows a single virtual machine to be deployed. A service template allows a complete, multitiered service to be designed and then deployed through a single action. Additionally, each tier can be configured to scale up and down as workloads vary, which enables additional instances of the virtual machine for a tier to be created and deleted as necessary. Deployed instances of a service template retain their relationship to the original service template, which means if the original service template is updated, the deployed instances can be refreshed and updated with the service template changes without losing application state.

Chapter 7: Failover Clustering and Migration Technologies

1. Understand the quorum model used in Windows Server 2012 R2. Windows Server 2012 R2 removes all the previous models that were based on how votes were allocated and the type of quorum resource. In Windows Server 2012 R2, each node has a vote and a witness is always configured, but it's used only when required. Windows Server 2012 introduced dynamic quorum, which helps ensure that clusters stay running for as long as possible by removing nodes' votes from quorum as those nodes become unavailable. Windows Server 2012 R2 added dynamic witness, which changes the vote of the witness resource based on whether there is an odd or even number of nodes in the cluster.

2. Identify the types of mobility available with Hyper-V. Mobility focuses on the ability to move virtual machines between Hyper-V hosts. Virtual machines within a cluster can be live migrated between any nodes very efficiently because all nodes have access to the same storage, so only the memory and state need to be copied between the nodes. Windows Server 2012 introduced the ability to move the storage of a virtual machine with no downtime, which, when combined with Live Migration, enables Shared Nothing Live Migration: a virtual machine can be moved between any two Hyper-V hosts without shared storage or a cluster and with no downtime to the virtual machine.

Shared Nothing Live Migration does not remove the need for failover clustering but provides the maximum flexibility possible, enabling virtual machines to be moved between stand-alone hosts, between clusters, and between stand-alone hosts and clusters.

1. Master It Why is constrained delegation needed when using Shared Nothing Live Migration with remote management?

2. Solution Windows does not allow a server that has been given a credential to pass that credential on to another server. Constrained delegation enables credentials to be passed from a server to another specific server for defined purposes. This enables management to be performed remotely, including migration initialization.
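With migration enabled and Kerberos chosen as the authentication type (the prerequisite for constrained delegation), a Shared Nothing Live Migration is a single Move-VM call; a sketch (VM, host, and path names are hypothetical):

```powershell
# Sketch: enable live migration on the source host with Kerberos
# authentication (required for constrained delegation), then move a
# VM and its storage to a stand-alone host. Names are hypothetical.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

Move-VM -Name App01 -DestinationHost HV02 `
    -IncludeStorage -DestinationStoragePath D:\VMs\App01
```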

3. Understand the best way to patch a cluster with minimal impact to workloads. All virtual machines in a cluster can run on any of the member nodes. That means before you patch and reboot a node, all virtual machines should be moved to other nodes using Live Migration, which removes any impact on the availability of the virtual machines. While the migration of virtual machines between nodes can be performed manually, Windows Server 2012 failover clustering provides Cluster Aware Updating, giving you a single-click ability to patch the entire cluster without any impact to virtual machines' availability. For pre–Windows Server 2012 clusters, SCVMM 2012 also provides an automated patching capability.

Chapter 8: Hyper-V Replica and Cloud Orchestration

1. Identify the best options to provide disaster recovery for the different services in your organization. When planning disaster recovery, an application-aware disaster recovery should be used first where possible, such as SQL AlwaysOn, Exchange DAG, Active Directory multimaster replication, and so on. If no application-aware replication and DR capability is available, another option is to look at the replication capabilities of the SAN such as synchronous replication. Additionally, replicating at the virtual machine disk level such as Hyper-V Replica provides a replication solution that has no requirements on the guest operating system or the application.

1. Master It Why is Hyper-V Recovery Manager useful?

2. Solution Hyper-V Replica provides the replication of the virtual machine but does not provide any enterprise management or failover orchestration. Hyper-V Recovery Manager provides a cloud-based portal to enable enterprise-level configuration, management, and execution of failover plans in a structured manner.

2. Describe the types of failover for Hyper-V Replica. There are three types of Hyper-V Replica failover. A test failover is performed on the replica server and creates a clone of the replica virtual machine that is disconnected from the network and allows testing of the failover process without any impact to the ongoing protection of the primary workload as replication continues. A planned failover is triggered on the primary Hyper-V host and stops the virtual machine, ensures any pending changes are replicated, starts the replica virtual machine, and reverses the replication. An unplanned failover is triggered on the replica Hyper-V host and is used when an unforeseen disaster occurs and the primary datacenter is lost. This means there may be some loss of state from the primary virtual machine. When possible, a planned failover should always be used.

1. Master It In an unplanned failover, how much data could be lost?

2. Solution The Hyper-V Replica configuration specifies a time interval to perform replication, which can be 30 seconds, 5 minutes, or 15 minutes. This relates to the recovery point objective (RPO), which is the amount of data that can be lost. A replication of 15 minutes means that potentially up to 15 minutes of data could be lost, while a replication of 30 seconds means that the maximum amount of data loss should be 30 seconds, provided there is no network bottleneck that is slowing down the transmission of replica log files.
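Selecting the replication frequency is part of enabling replication for the virtual machine; a sketch (VM and replica server names are hypothetical, and -ReplicationFrequencySec is a 2012 R2 option — 2012 always replicated every 5 minutes):

```powershell
# Sketch: protect a VM with a 5-minute (300-second) replication
# frequency; 30 and 900 are the other valid values in 2012 R2.
# "App01" and "HVReplica01" are hypothetical names.
Enable-VMReplication -VMName App01 -ReplicaServerName HVReplica01 `
    -ReplicaServerPort 80 -AuthenticationType Kerberos `
    -ReplicationFrequencySec 300
Start-VMInitialReplication -VMName App01

# Check replication health and how far the replica lags.
Measure-VMReplication -VMName App01
```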

3. Explain the automated options for Hyper-V Replica failover. Hyper-V Replica has no automated failover capability. To automate the failover steps, PowerShell, System Center Orchestrator, or, for a complete solution, Hyper-V Recovery Manager can be used. The key point is that the actual decision to fail over should not be automatic, because many conditions, such as a break in network connectivity, could trigger a false failover. The automation required should be the orchestration of the failover once a manual decision has been made that a failover should occur.

Chapter 9: Implementing the Private Cloud and SCVMM

1. Explain the difference between virtualization and the private cloud. Virtualization enables multiple operating system instances to run on a single physical piece of hardware by creating multiple virtual machines that can share the resources of the physical server. This enables greater utilization of a server's resource, reduction in server hardware, and potential improvements to provisioning processes. The private cloud is fundamentally a management solution that builds on virtualization but brings additional capabilities by interfacing with the entire fabric including network and storage to provide a complete abstraction and therefore management of the entire infrastructure. This allows a greater utilization of all available resources, which leads to greater scalability. Because of the abstraction of the actual fabric, it is possible to enable user self-service based on their assignment to various clouds.

1. Master It Do you need to change your fabric to implement the private cloud?

2. Solution Typically not. Provided your storage supports SMI-S so that SCVMM can communicate with it, and your compute and network resources meet your desired levels of capability, there should be no need to change the actual fabric. Only the management will change.

2. Describe the must-have components to create a Microsoft private cloud. The foundation of a Microsoft private cloud solution would be virtualization hosts using Hyper-V and then SCVMM and App Controller to provide the core fabric management, abstraction, cloud creation, and end-user self-service functionality. Orchestrator and Service Manager can be utilized to build on this core set of private cloud functionality to bring more advanced workflows, authorization of requests, and charge-back functionality.

Chapter 10: Remote Desktop Services

1. Explain the types of desktop virtualization provided by RDS. Windows Server 2012 R2 provides two main types of desktop virtualization: session-based desktops and VDI-based desktops. There are two types of VDI deployments: pooled and personal.

1. Master It When should VDI be used over session-based virtualization?

2. Solution The primary difference between session-based virtualization and VDI desktops is one of isolation. If there are particular users who require a high level of isolation from other users, such as needing to customize the operating system or reboot it, then VDI is a good fit. For other users, such as task-based workers who are more locked down, session-based virtualization is a good solution.

2. Describe the benefits of RemoteFX and its requirements. RemoteFX brings a number of technologies, such as USB port-level redirection and improved codecs, that as of Windows Server 2012 are available separately from GPU virtualization. GPU virtualization, the other primary RemoteFX technology, allows a physical GPU to be virtualized and assigned to VDI virtual machines running client operating systems. Using RemoteFX vGPU gives virtual machines local graphical resources, enabling them to run rich graphical applications, specifically those that leverage DirectX. To use RemoteFX vGPU, the graphics card must support DirectX 11 or newer and have a WDDM 1.2 driver or newer, and the processor must support SLAT.

1. Master It Is RemoteFX vGPU a good solution for OpenGL applications?

2. Solution No. OpenGL 1.1 is supported using CPU only and does not utilize the vGPU. RemoteFX vGPU is targeted at DirectX applications.

3. Articulate the other technologies required for a complete virtualized desktop solution. The complete user experience comprises a number of layers. The operating system provided by VDI or session virtualization is just the foundation for the user experience. The users need access to their profiles, their data, and their applications. To provide data access, the most common technology is folder redirection. For a user's profile, while historically roaming profiles were used, a better and more granular solution is UE-V, which provides application-level setting replication. For the applications, technologies such as App-V and RemoteApp can be leveraged, while specific core applications could be installed on the RD Session Host or VDI virtual machine template.

1. Master It Why is it best to minimize the number of applications installed in the VM VDI template image?

2. Solution Every application installed in a reference image will eventually need to be updated, which is additional maintenance on the template. This is not a simple process because any change will require running Sysprep again, which has its own complexities. Additionally, the more applications installed in the template, the bigger the template and the more resources consumed that would be wasted unless the application is used by every single user. With App-V and RemoteApp, there are better ways to enable applications in the environment.

Chapter 11: Windows Azure IaaS and Storage

1. Explain the difference between Platform as a Service and Infrastructure as a Service. The key difference relates to who is responsible for which elements of the solution. With Platform as a Service, solutions are written for a supplied platform within certain guidelines. The platform then ensures availability and protection for the application, and there is no operating system or fabric management required. The key point is that the application must be written to work with the PaaS platform. With Infrastructure as a Service, a virtual machine is provided, which means the provider manages the compute, storage, and network fabric, but the user of the virtual machine is responsible for the operating system and everything within it, including patching. The benefit of IaaS is that you have complete access to the operating system, so normal applications can run in IaaS without requiring customization. A key principle of IaaS is that you should not have to modify the application to work on it.

1. Master It What is Software as a Service?

2. Solution Software as a Service requires no infrastructure management from the user of the service because a complete, maintained solution is provided that is accessible, typically over the Internet. The only administration relates to basic configuration and administration of users of the service. A good example of SaaS is Office 365, which is Microsoft's Exchange-, Lync-, and SharePoint-based service in the cloud.

2. Connect Windows Azure to your on-premises network. To create connectivity between Windows Azure and your local network, there are a number of requirements. First, virtual networks need to be defined in Windows Azure in affinity groups. Virtual machines are then configured at creation time to use a specific subnet in the virtual network. A site-to-site gateway is created between Windows Azure and your on-premises network, which permits seamless connectivity.

1. Master It Can Windows Server 2012 RRAS be used on the local premises side of the VPN gateway?

2. Solution Yes. Windows Server 2012 RRAS can be used for the on-premises side of the VPN connection and the Windows Azure management portal will generate the full configuration script required to enable automatic configuration.

3. Move data between on-premises and Windows Azure. Windows Azure is built on Windows Server Hyper-V and specifically leverages the VHD format currently. A virtual machine that uses VHD can be copied to Windows Azure storage and used with a new Windows Azure virtual machine or added to an existing virtual machine. Similarly, VHD files used in Windows Azure virtual machines can be downloaded to on-premises and used with Hyper-V virtual machines.

1. Master It What PowerShell cmdlets are used to copy VHDs to and from Windows Azure?

2. Solution Add-AzureVhd and Save-AzureVhd are used.

3. Master It Can dynamic VHDs be used in Windows Azure?

4. Solution No. All VHDs must be fixed, and the Add-AzureVhd cmdlet converts dynamic VHDs to fixed VHDs during the upload. However, Windows Azure Storage stores files sparsely, which means only blocks that have data written to them are actually stored and therefore billed.
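Both cmdlets take a local path and a blob URL; a sketch (the storage account name, container, and local paths are hypothetical, and an authenticated Windows Azure PowerShell session is assumed):

```powershell
# Sketch: upload a local VHD to Windows Azure storage, and download
# one back. Storage account, container, and paths are hypothetical.
# Add-AzureVhd converts dynamic VHDs to fixed during the upload and
# transfers only blocks that contain data.
Add-AzureVhd -LocalFilePath D:\VMs\app01.vhd `
    -Destination "https://mystorageacct.blob.core.windows.net/vhds/app01.vhd"

Save-AzureVhd -Source "https://mystorageacct.blob.core.windows.net/vhds/app01.vhd" `
    -LocalFilePath D:\Downloads\app01.vhd
```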

Chapter 12: Bringing It All Together with a Best-of-Breed Cloud Solution

1. Identify the overall best architecture for your organization. As this chapter has shown, there are a lot of things to consider when choosing a cloud-based solution for an organization. It's important to take the time to understand the organization's strategic direction, its resources, and the needs of its workloads. Only then can an architecture be created that utilizes the strengths of the different options.

1. Master It What is the most important first step in deciding on the best architecture?

2. Solution Have a clear direction for your IT organization. Is it cloud first? Is it to focus on best-in-class datacenters? This will guide the architecture design and final solution.

Chapter 13: The Hyper-V Decoder Ring for the VMware Administrator

1. Understand how System Center compares to vSphere solutions. System Center comprises a number of components that each provide different functionalities. vSphere has a similar set of solutions, and a basic mapping is shown here:

· SC Virtual Machine Manager = vCenter Server

· SC Data Protection Manager = vSphere Data Protection Advanced

· SC Operations Manager = vCenter Ops Mgmt Suite

· SC Orchestrator = vCenter Orchestrator

· SC Service Manager = vCloud Automation Center

· App Controller and SC Service Manager = vCloud Director

The levels of functionality are not exactly the same, but the list shows how the key functional areas of the products map to each other.

1. Master It How is System Center licensed?

2. Solution System Center is a single product that is available in two versions, Standard and Datacenter. Both versions are functionally identical, with the only difference being the number of virtual OS instances that are managed as part of the license. Standard provides licensing for two virtual OS instances, while Datacenter provides licensing for an unlimited number of virtual OS instances. Both Standard and Datacenter are licensed by physical processor, and each license covers two processors.

3. Convert a VMware virtual machine to Hyper-V. There are two aspects to converting a virtual machine: there is the virtual machine configuration, such as the number of vCPUs, memory, network connectivity, and so on; and then there are the virtual hard disks that contain the operating system and data. There are various solutions to address both parts of the conversion. The primary Microsoft VMware to Hyper-V tool is Microsoft Virtual Machine Converter, which performs migrations in an interactive fashion. Command-line tools are also available.

1. Master It How can MVMC be used as part of a bulk conversion process?

2. Solution The Microsoft Automation Toolkit (MAT) available at http://gallery.technet.microsoft.com/Automation-Toolkit-for-d0822a53 utilizes MVMC but adds discovery and automation to provide a larger-scale conversion solution.

3. Master It How does NetApp help with VMDK to VHD conversions?

4. Solution NetApp has a native VMDK to VHD conversion capability that rewrites the header of the virtual hard disk instead of having to perform a full conversion of the actual virtual hard disk data. This enables conversions in seconds instead of minutes.