Mastering Hyper-V 2012 R2 with System Center and Windows Azure (2014)
Chapter 13. The Hyper-V Decoder Ring for the VMware Administrator
This chapter offers a quick decoder ring for readers with VMware experience. Throughout this book I have covered Hyper-V and System Center; in this final chapter, I explain what the Hyper-V and System Center equivalents are for the VMware technologies that you may be familiar with. I also address some common misconceptions I have seen when comparing VMware to Hyper-V. My goal for this chapter is not to point out flaws in VMware; instead, I will show how Hyper-V technologies map to features you may already be using in VMware.
In this chapter, you will learn to
· Understand how System Center compares to vSphere solutions
· Convert a VMware virtual machine to Hyper-V
Overview of the VMware Solution and Key Differences from Hyper-V
The VMware hypervisor is ESXi, which (like Hyper-V) is a type 1 hypervisor that runs directly on the hardware. Unlike ESX, ESXi does not use the ESX Service Console, which was a Linux environment used as part of the boot process and for management purposes. ESXi has a small footprint, less than 200 MB, and is a monolithic hypervisor, meaning all drivers are specifically written to be contained and used within the hypervisor.
You manage a VMware environment through the VMware vSphere Client. The client can connect directly to an ESXi host or to a VMware vCenter Server. Connecting to a vCenter server provides a higher level of management capability and enables more features of vSphere. A web client is also available for browser-based management.
While the ESXi hypervisor is a free download (like the free full-featured Microsoft Hyper-V Server), to enable the majority of features, such as migration and high availability, you need to buy a license for one of the available vSphere editions. They are licensed in one-processor increments. VMware has a detailed comparison between the different versions at www.vmware.com/products/vsphere/compare. The following are the key features enabled in the three main versions:
1. Standard: vMotion; Storage vMotion; HA; Fault Tolerance; hot-add of CPUs, memory, and devices; vSphere Replication; Data Protection; and vShield Endpoint
2. Enterprise: Reliable memory, Hadoop support, Virtual Serial Port Concentrator, DRS and DPM, storage APIs for array integration, and multipathing
3. Enterprise Plus: Storage DRS, storage policies, Storage and Network I/O Control, SR-IOV, flash read cache, distributed switch, host profiles, and Auto Deploy
For details of the number of virtual processors for a virtual machine, see http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2001113. This document shows the following (based on vSphere 5.5):
· Standard: 8 vCPUs per VM
· Enterprise: 32 vCPUs per VM
· Enterprise Plus: 64 vCPUs per VM
Many VMware enterprises struggle to map this type of licensing to Hyper-V licensing. Remember that Hyper-V is a role of Windows Server, and the Hyper-V capabilities are identical whether you use Windows Server Standard, Windows Server Datacenter, or even the free Microsoft Hyper-V Server. The only difference between the versions is the number of virtual Windows Server operating system instance rights that are included, shown here:
· Microsoft Hyper-V Server: 0
· Windows Server Standard: 2
· Windows Server Datacenter: Unlimited
The number of virtual machines you want to run that will use the Windows Server operating system will guide the version of Windows Server you use. Obviously, VMware does not include license rights for Windows Server guest operating systems, and when using VMware, you will still license the Windows Server operating system where needed. This is why Hyper-V is considered “free.” You likely already own Windows Server to license the virtual machines running on VMware. If you don't run Windows Server virtual machines, then you can use the free Microsoft Hyper-V Server. Windows Server licenses also cover two physical processors.
To achieve a similar level of management, and to enable a small number of comparative features such as VMware's DRS, you must also use System Center. System Center is licensed like Windows Server, as either Standard or Datacenter: once again, Standard includes two virtual OS instances, while Datacenter covers an unlimited number. The difference with System Center is that any VM managed by System Center must be covered, even if it is not running a Windows Server OS. For example, Linux VMs managed by System Center need to be licensed for System Center.
A vSphere vCenter Server is deployed to provide enterprise management of ESXi and to enable most of the features, such as vMotion and DRS. vCenter Server can be installed on Windows Server or deployed as a Linux-based virtual appliance. The same vSphere client is used to connect to a vCenter Server, which provides centralized management of all the ESXi hosts in the environment. A number of plug-ins are available for vCenter Server; an example is vSphere Update Manager, which is used to patch the ESXi hosts as well as selected virtual machines.
In recent years VMware has purchased a number of companies and has developed technologies to offer a richer management solution, enabling VMware to move beyond just the hypervisor. Currently there are a number of ways to purchase these solutions. VMware has grouped the technologies into VMware vCloud Suite, which can be thought of as the VMware version of System Center in addition to other capabilities. The suite enables, among other things, private cloud capabilities. Some key products include the following:
1. vCenter Orchestrator: Automation technologies and workflow capabilities
2. vCloud Automation Center: Enables self-service, service delivery, and the ability to unify cloud management across clouds from different vendors
3. vCenter Operations Management Suite: Monitoring of and insight into the infrastructure
4. vCloud Director: Creation of virtual datacenters including compute, storage, and networking
5. vSphere Data Protection Advanced: Backup and recovery solution
There is a lot of similarity between the various VMware and Microsoft products and components. Table 13.1 maps the key areas of functionality. There are differences in exact features available, but that's not something I cover in this chapter; instead, this table highlights the corresponding Microsoft solution for those familiar with VMware.
Table 13.1 Technology solutions for Microsoft and VMware
Technology | Microsoft | VMware
Hypervisor | Hyper-V | ESXi
VM management | System Center Virtual Machine Manager | vCenter Server
Backup and protection | System Center Data Protection Manager | vSphere Data Protection Advanced
Monitoring | System Center Operations Manager | vCenter Operations Management Suite
Automation | System Center Orchestrator | vCenter Orchestrator
Service manager | System Center Service Manager | vCloud Automation Center
Self-service | App Controller and System Center Service Manager | vCloud Director
In the next section, I cover the Hyper-V equivalents of the most popular VMware technologies and more importantly how to achieve the desired functionality. Often I see organizations hung up on “Where is VMware feature X in Hyper-V?” when what they really want to know is “How do I achieve this capability in Hyper-V?” Different products work in different ways; that's what gives products advantages and disadvantages compared to each other. What is important is being able to meet the needs of an organization in the way that is most efficient for a product rather than trying to force a product to behave like another.
Translating Key VMware Technologies and Actions to Hyper-V
I've seen my fair share of requests for comment (RFCs) and requests for proposal (RFPs) from organizations looking to evaluate their virtualization or private cloud needs. Sometimes they have been a fair, open exercise focused on how to achieve a certain requirement. Other times they looked like they were written by one of the vendors and were completely skewed toward "How do you do vendor feature X?" Working with a number of organizations that were evaluating or implementing Hyper-V after using VMware for many years, I've found a huge level of misunderstanding, often rooted in experience with Hyper-V in Windows Server 2008 or even with Virtual Server. If you have read the rest of this book, you have great insight into the range of capabilities of Hyper-V; in this section, I highlight the most common VMware features and also address some of the biggest misunderstandings I've seen.
Translations
Table 13.2 shows some of the most common maximums related to hosts and virtual machines. The VMware information is based on www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf. vSphere 5.5 increased a number of key maximums from 5.1 to match the Hyper-V values.
Table 13.2 VMware and Hyper-V Maximums
Item | VMware | Hyper-V
Logical CPUs per host | 320 | 320 (640 without Hyper-V)
vCPUs per host | 4096 | 2048
Memory per host | 4 TB | 4 TB
VMs per host | 512 | 1024
Nodes per cluster | 32 | 64
VMs per cluster | 4000 | 8000
vCPUs per VM | 64 | 64
RAM per VM | 1 TB | 1 TB
Virtual SCSI adapters per VM | 4 | 4
Attached disks per virtual SCSI adapter | 15 | 64
Virtual network adapters per VM | 10 | 8 synthetic (plus 4 legacy in a generation-one VM)
USB devices per VM | 20 | Not supported; use a USB-over-IP solution if required, or map USB drives as part of an RDP connection to the VM
Maximum virtual hard disk | 62 TB | 64 TB
Snapshots (VMware)/checkpoints (Hyper-V) per VM | 32 (although 2 to 3 per VM are recommended; see http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1025279) | 50
As you can see, the maximums are similar, and both hypervisors can scale to high levels to run pretty much any workload. Next I cover specific technologies and how they compare.
Hot-Add of Resources
Hyper-V supports the hot-add of storage to the SCSI controller but not to the IDE controller. In generation-two virtual machines, the only type of controller is SCSI, and for generation-one virtual machines, the SCSI controller should be used for data disks, with the IDE controller used for the OS disk only.
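As a sketch of that hot-add workflow using the Hyper-V PowerShell module (the VM name and paths here are hypothetical), a new data disk can be created and attached to a running virtual machine's SCSI controller:

```powershell
# Create a new dynamic VHDX to use as a data disk
New-VHD -Path D:\VMs\VM01-Data1.vhdx -SizeBytes 100GB -Dynamic

# Hot-add it to the SCSI controller of the running VM
# (hot-add works only for the SCSI controller, not IDE)
Add-VMHardDiskDrive -VMName VM01 -ControllerType SCSI -Path D:\VMs\VM01-Data1.vhdx
```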
If a virtual machine is configured to have dynamic memory, then memory is added to and removed from the virtual machine as needed. The maximum memory for a virtual machine can be increased while the virtual machine is running. If a virtual machine is configured to use static memory, then its memory cannot be increased.
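Dynamic memory can be configured with the same PowerShell module; a minimal sketch, assuming a VM named VM01 and illustrative memory values:

```powershell
# Enable dynamic memory with start, minimum, and maximum values
# (the VM must be off to switch between static and dynamic memory)
Set-VMMemory -VMName VM01 -DynamicMemoryEnabled $true `
    -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 8GB

# The maximum can be raised later while the VM is running
Set-VMMemory -VMName VM01 -MaximumBytes 16GB
```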
The hot-add of processors is not supported by Hyper-V; however, because Hyper-V and ESXi schedule processors differently, there is little downside to allocating more vCPUs to a Hyper-V virtual machine than you expect it to need. In other words, you can safely over-allocate vCPUs to a VM.
To restrict the number of CPUs that can be used by a virtual machine, the virtual machine limit on the processor (as shown in Figure 13.1) could be set to a lower percentage. Then, if the virtual machine needs more processor resources, the virtual machine limit value can be increased while the virtual machine is running. For example, I may give a virtual machine four vCPUs but set the virtual machine limit to 50 percent, which effectively means the virtual machine can use two vCPUs' worth of resource. At a later time while the virtual machine is running, I could change the limit to 75 percent or 100 percent to increase its processor resources. This approach is beneficial to the majority of applications because most applications don't actually support the hot-add of processors.
Figure 13.1 Setting the virtual machine limit for processors
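The same limit can be set from PowerShell instead of the GUI shown in Figure 13.1; a minimal sketch, assuming a VM named VM01:

```powershell
# Four vCPUs capped at 50 percent: effectively two vCPUs' worth of resource
# (changing -Count requires the VM to be off; the -Maximum percentage
# can be adjusted while the VM is running)
Set-VMProcessor -VMName VM01 -Count 4 -Maximum 50

# Later, while the VM is running, raise the limit to 100 percent
Set-VMProcessor -VMName VM01 -Maximum 100
```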
File-Level Storage Usage
VMware can leverage NFS servers to store virtual machines. For Hyper-V, SMB 3 storage can be leveraged instead, such as a Windows file server running Windows Server 2012 or newer or a SAN/NAS that supports SMB 3.
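For instance, a new virtual machine can be placed directly on an SMB 3 share; a sketch in which the share path and VM name are hypothetical:

```powershell
# Create a VM whose configuration and virtual hard disk live on an SMB 3 share
New-VM -Name VM02 -MemoryStartupBytes 1GB -Path \\fs01\VMStore `
    -NewVHDPath \\fs01\VMStore\VM02\VM02.vhdx -NewVHDSizeBytes 60GB
```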
Virtual Hard Disk Format
While VMware uses the VMDK format, Hyper-V uses the VHD and VHDX formats for its virtual hard disks. VHDX also supports sharing a single VHDX among multiple virtual machines; a shared VHDX appears to those virtual machines as shared storage and can be used as the shared storage for a guest cluster.
VMware supports thick and thin VMDK files. The Hyper-V equivalents are dynamic VHDX for a thin VMDK, which allocates space as data is written, and fixed VHDX for thick VMDK, where all space is preallocated. Parent–child relationships are also supported using the differencing-type VHDX.
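The three VHDX types map to PowerShell as follows (the paths are hypothetical):

```powershell
# Thin-equivalent: dynamic VHDX; space is allocated as data is written
New-VHD -Path D:\VMs\Thin.vhdx -SizeBytes 100GB -Dynamic

# Thick-equivalent: fixed VHDX; all space is preallocated
New-VHD -Path D:\VMs\Thick.vhdx -SizeBytes 100GB -Fixed

# Parent-child: differencing VHDX that stores only changes from its parent
New-VHD -Path D:\VMs\Child.vhdx -ParentPath D:\VMs\Parent.vhdx -Differencing
```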
Both Hyper-V and VMware support mapping raw storage directly to a virtual machine.
Networking Features
Both VMware and Hyper-V support a range of networking capabilities such as VLANs, PVLANs, IPv6, Jumbo Frames, offload support, network QoS, NIC teaming, and so on. VMware has VMware NSX for software-defined networking (SDN), while Hyper-V uses Hyper-V Network Virtualization.
Hyper-V supports the Cisco Nexus 1000V in addition to many other extensions for the Hyper-V Extensible switch.
Distributed Switch
To provide centralized management and configuration for virtual switches in Hyper-V, System Center Virtual Machine Manager network fabric management is used. Using a combination of logical networks, port profiles, and logical switches, a centralized network architecture is created, and connectivity is centrally managed and deployed to groups of Hyper-V hosts.
vMotion and Storage vMotion
Hyper-V uses Live Migration (think vMotion) to provide zero-downtime migration of virtual machines between nodes in a cluster. Hyper-V also has Storage Migration (think Storage vMotion) to move storage with no impact on the availability of virtual machines. Windows Server 2012 introduced shared-nothing live migration, which enables virtual machines to be moved between hosts without any common infrastructure such as shared storage or cluster membership; there is no VMware equivalent. Windows Server 2012 supports multiple simultaneous live migrations with no fixed limit; instead, it dynamically adjusts the number of concurrent migrations based on available bandwidth. Windows Server 2012 R2 adds compression of live migration data and can leverage RDMA if it is available on the network adapter.
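Both migration types can be driven from PowerShell; a sketch with hypothetical host and path names:

```powershell
# Shared-nothing live migration: move the running VM and its storage
# to another host with no shared storage or common cluster membership
Move-VM -Name VM01 -DestinationHost HV02 `
    -IncludeStorage -DestinationStoragePath D:\VMs\VM01

# Storage-only migration (think Storage vMotion), leaving the VM on its host
Move-VMStorage -VMName VM01 -DestinationStoragePath E:\VMs\VM01
```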
VMware leverages Enhanced vMotion Compatibility (EVC) to enable vMotion between hosts with different versions of a processor within a processor family. Hyper-V has a similar feature, processor compatibility mode, that enables the same mobility between different versions of processors within a family. Note you cannot live migrate between Intel and AMD processors.
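Processor compatibility mode is enabled per VM while the VM is shut down; a minimal sketch:

```powershell
# Restrict the VM to a baseline processor feature set so it can
# live migrate between different processor versions of the same vendor
Set-VMProcessor -VMName VM01 -CompatibilityForMigrationEnabled $true
```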
Distributed Resource Scheduling and Distributed Power Management
Distributed Resource Scheduling (DRS) provides the automatic vMotion of virtual machines based on processor, memory, and storage utilization to better balance workloads across hosts. SCVMM has this same capability, Dynamic Optimization, which balances workloads across hosts based on defined criteria related to CPU, memory, disk, and network, as shown in Figure 13.2, and moves the virtual machines using Live Migration.
Figure 13.2 Configuring dynamic optimization on a host group in SCVMM
Distributed Power Management (DPM) consolidates virtual machines on hosts to minimize power utilization and enable hosts to be powered down in quiet times. SCVMM has the same feature, called Power Optimization.
Note that this is one of the few features that requires SCVMM instead of being native to Hyper-V.
Update Manager
Update Manager provides the patching of the vSphere environment with some limited guest patching. Hyper-V is a role of Windows Server, which means existing Windows patching solutions can be used such as Windows Server Update Services and System Center Configuration Manager. Additionally, solutions exist to patch an entire Windows Server cluster with one click with no impact to the clustered resources, including virtual machines, which are automatically live migrated between hosts. SCVMM and Failover Clustering both provide this one-click patching capability.
High Availability
Hyper-V leverages the Failover Clustering feature of Windows for its high availability and automatic restart, which is similar to the VMware clustering feature.
Failover Clustering is a proven technology, and while it had complexities related to its setup and maintenance prior to Windows Server 2008, with Windows Server 2008 the management side of Failover Clustering was rewritten. Now it provides a simple setup procedure requiring minimal information, and the maintenance is far more intuitive. Additionally, the old quorum disk is no longer required. Instead, a file share or disk witness is configured that is used only if required based on the current number of active nodes in the cluster.
Fault Tolerance
Fault Tolerance is a VMware feature that runs a second instance of a VM on another VMware host and keeps the two synchronized using a processor lockstep process. It requires low-latency network connections, and virtual machines are limited to a single vCPU. This feature is not available in Hyper-V. It is possible to achieve a similar result on Hyper-V with third-party software on specific hardware, but this type of fault tolerance is better implemented in the application than in the hypervisor, so it's still preferable to let the application provide this kind of availability.
vSphere Replication and Site Recovery Manager
vSphere Replication provides replication of a virtual machine to another host, and Site Recovery Manager (SRM) provides an enterprise failover experience, which is used by a number of VMware customers.
Hyper-V Replica provides an asynchronous replication of virtual machines including the ability to maintain checkpoints of previous points in time with optional VSS integration. Hyper-V Replica supports planned, unplanned, and test failovers with the option of alternate IP injection as part of the failover process.
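A sketch of enabling replication from PowerShell, in which the server names, port, and frequency are illustrative (Windows Server 2012 R2 allows a 30-second, 5-minute, or 15-minute replication frequency):

```powershell
# Enable replication to a replica server using Kerberos over HTTP,
# replicating every 5 minutes and keeping 4 additional recovery points
Enable-VMReplication -VMName VM01 -ReplicaServerName HVREP01 `
    -ReplicaServerPort 80 -AuthenticationType Kerberos `
    -ReplicationFrequencySec 300 -RecoveryHistory 4

# Start the initial replication of the VM to the replica server
Start-VMInitialReplication -VMName VM01
```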
For an orchestrated complete failover with additional actions such as running scripts, Hyper-V Recovery Manager (HRM) can be used, which is a Windows Azure service that works by communicating to SCVMM instances in each datacenter.
Memory Overcommit and Transparent Page Sharing
VMware memory allocation for virtual machines works by defining a single amount of memory for a virtual machine; physical RAM is then allocated only as the virtual machine writes to its memory. This enables an "overcommit" of memory: for example, four virtual machines configured with 4 GB of memory each could all start on a host with 8 GB of memory. The assumption is that each virtual machine will not try to write to all of its configured memory, but that assumption is flawed for modern operating systems such as Windows Server 2008, which will use all available memory if possible, even if only for caching purposes. The logic is, why leave memory empty if it could bring some benefit? Therefore, with modern operating systems, memory overcommit is not widely used in production environments, nor is aggressive use of it recommended. And if physical memory is not available, disk-based paging may be used, which results in performance degradation.
Hyper-V uses dynamic memory that uses start, minimum, and maximum values defined on the virtual machine and monitors the actual memory used by the operating system within the virtual machine. Additional memory is added to the virtual machine if needed by processes. Hyper-V, like VMware, uses ballooning to reclaim memory.
Transparent page sharing (TPS) works by looking for duplicate 4 KB memory pages; when one is found, the page is stored only once in memory. This can work well for hosts running many virtual machines with the same OS version. The problem is that modern operating systems (Windows Server 2008 and modern Linux) use large memory pages that are 2 MB in size. The chances of finding duplicate 2 MB pages are virtually nil, and there are some other negatives (as discussed previously in this book), so VMware does not use TPS on large memory pages. Because of the limited usefulness of TPS on modern operating systems, Hyper-V does not implement a memory deduplication feature.
Most Common Misconceptions
The following are some common misconceptions:
1. Hyper-V runs on Windows Server and is a type 2 hypervisor.
Hyper-V is a type 1 hypervisor and runs directly on the bare-metal hardware. It requires the processor to support hardware-assisted virtualization (often described as Ring −1). The confusion stems from the sequence of actions used to install Hyper-V:
1. Install Windows Server.
2. Enable the Hyper-V role.
3. Manage it from Windows Server.
It is quite reasonable to assume Hyper-V is running on top of Windows Server and therefore is a type 2 hypervisor. The reality is when the Hyper-V role is enabled, the boot manager for Windows is modified to load the Hyper-V hypervisor first, and then the existing Windows Server operating system runs on top of Hyper-V. The Windows Server operating system acts as a management partition for Hyper-V, enabling management of the operating system and also handling resource access for networking and storage.
2. Hyper-V requires far more patching than ESXi. This is a trickier one, and I've seen numbers from both sides. The reality is that ESXi requires patching, as do Hyper-V and the Windows Server management partition Hyper-V relies on. By using the Server Core installation of Windows Server for the management partition, the number of applicable fixes is greatly reduced, and the frequency of reboots is reduced with it. I don't think the difference is as great as we are commonly led to believe, and I've seen numbers showing that ESXi actually required more patches than Hyper-V. The key point, however, is that by using SCVMM or the Failover Clustering feature's one-click patching, there is no actual administrative work to patch an entire cluster, and there is no downtime for the virtual machines, which are seamlessly moved between hosts during host patching. Even when patching is required, and even if a reboot is required, it's no work for you and doesn't impact the virtual machines.
3. Hyper-V is a security risk because of the install size.
Size is not a good indication of security risk; a small paper bag is not more secure than a large steel vault. What's important is the design, testing, and processes behind a solution. Strictly speaking, the Hyper-V hypervisor is actually smaller than ESXi, but Hyper-V relies on the Windows Server management partition for certain resource access and virtual machine management, and that partition has a far larger disk footprint; however, using Server Core reduces the number of applicable patches and shrinks the attack surface.
It's important to realize that Windows Server is used to run most of the application workloads used in organizations anyway, which means it's already a trusted platform from a security perspective, has great technologies built in to mitigate security risks, and is one of, if not the, most tested operating system in the world with huge numbers of resources ensuring its integrity. Windows is certified to the highest security standards and is used by governments, militaries, and the largest companies in the world.
4. VMware supports more operating systems than Hyper-V.
This one is a fact. VMware supports more operating systems than Hyper-V; see http://technet.microsoft.com/en-us/library/hh831531.aspx for the list of operating systems supported on Hyper-V. It is a comprehensive list of the supported versions of Windows Server, Windows clients, and Linux distributions.
VMware does support far more operating systems, but the truth is that many of them are operating systems their own vendors no longer support. For example, VMware supports Windows Server 2000, which isn't supported by Microsoft anymore. In fact, VMware only just dropped MS-DOS and Windows NT support with ESXi 5.5 (http://blogs.vmware.com/guestosguide/2013/09/terminated-os-releases.html).
Microsoft takes a stricter approach to the operating systems supported with Hyper-V; it supports only those operating systems that are supported by the vendor as well. Other operating systems will work on Hyper-V. In fact, even Windows Server 2012 R2 Hyper-V still has the processor compatibility flag that enables Windows NT to run on Hyper-V, but Microsoft does not support the operating system on Hyper-V since the Windows NT operating system itself is no longer supported. I therefore think this is a nonissue because both hypervisors support modern operating systems and those that are still supported by the vendor.
From a Windows perspective, Hyper-V has better and more integrated support for newer operating system versions, and even new Linux distributions have the Hyper-V integration services built-in.
6. VMware can host more VMs than Hyper-V. This was true with older operating systems such as Windows Server 2003, because VMware's allocate-on-first-write approach worked well for them and transparent page sharing was effective while operating systems still used small memory pages. With Windows Server 2008 and modern Linux operating systems utilizing available memory for caching and other optimization purposes, the VMware memory optimizations are no longer as effective, and Hyper-V's dynamic memory feature actually yields a higher density of virtual machines, because memory allocation is based on the memory used by processes running within the VM rather than on a static amount. Dynamic memory also works with Linux virtual machines when using the latest supported Linux distributions.
7. Hyper-V works well only for Windows guests. Hyper-V works great for Windows virtual machines, but as of 2012 R2 it offers almost the same set of features for virtual machines running Linux. Microsoft was actually one of the top 20 contributors to the Linux kernel in 2012, and the Hyper-V integration services are built in to the latest Linux distributions. Microsoft has a long list of supported Linux guests, as documented at http://technet.microsoft.com/en-us/library/hh831531.aspx. The following are supported with Linux guests: 64 vCPUs, 1 TB of RAM, live migration, Hyper-V Replica including IP injection during failover, and pretty much everything else. The only features I am aware of that currently do not work with Linux are Virtual Fibre Channel, RemoteFX, SR-IOV, vRSS, and generation-two virtual machines. Microsoft considers Linux a first-class operating system for Hyper-V and is looking to ensure feature parity for Windows and Linux guests as much as possible, making Hyper-V a great virtualization solution for Windows and Linux.
7. Hyper-V does not have a clustered file system. Windows has the well-established NTFS file system that is industry proven and is well understood by most IT groups, and it has a huge set of partner tools that work with it. Microsoft has leveraged NTFS as the foundation for shared disk cluster purposes by enhancing it with Cluster Shared Volumes to create CSVFS; this enables NTFS volumes to be simultaneously used by all the nodes in the cluster. Because CSVFS is built on NTFS, the existing disk tools work without requiring modification.
8. The application is supported only on VMware.
Server virtualization provides a virtualized set of hardware for the operating system that is installed in the virtual machine. The actual application running in the virtual machine should perform the same way if it's running on physical hardware in a Hyper-V virtual machine or a VMware virtual machine. However, the application vendors do need to bless the various virtualization solutions to ensure they will support your deployment. In my experience, there are few application vendors that support VMware but don't support Hyper-V. For example, Oracle now supports its solutions on Hyper-V in addition to its own hypervisor. Check with the vendors of your applications to be 100 percent clear on their support policies.
If you have virtual appliances that are provided as OVF format, there is an import tool available (documented at http://technet.microsoft.com/en-us/library/jj158932.aspx) that will allow their conversion and import into Hyper-V.
9. Hyper-V is not a proven platform. Hyper-V is now in its fourth major version. If you look at the Gartner magic quadrant for x86 server virtualization, Microsoft is one of only two vendors in the leaders quadrant (along with VMware). Hyper-V is used by many of the Fortune 500 organizations and powers some of the largest services in the world, including Windows Azure.
10. VMware runs great, so why change it? Actually this one I agree with. If VMware is working exactly how you need it to work, can be managed the way you want, and works for your organization financially, then changing to Hyper-V does not make sense. In reality, however, many organizations are looking to save IT dollars by using Hyper-V and System Center because most already own them. In addition, organizations are looking to use a single management platform across operating systems, hypervisors, and the actual fabric, and organizations want compatibility with Windows Azure and other cloud solutions. There are many more reasons, but the key point is the shift to Hyper-V should be because of the benefits your organization will get from the switch.
11. The migration from VMware to Hyper-V is too risky. This is a valid concern, but later in this chapter I will talk about the tools that provide an automated conversion of VMware virtual machines to Hyper-V, namely, the Microsoft Virtual Machine Converter Solution Accelerator (MVMC) and the Migration Automation Toolkit (MAT). The tool you use is one part of the complete migration; equally important are the discovery, planning, and preparation phases, which is why organizations often bring in a consulting organization to assist with at least some of the migration.
Converting VMware Skills to Hyper-V and System Center
If you have administered only VMware and not Windows Server, then the move to Hyper-V and System Center will seem intimidating because you also need to be familiar with Windows Server. If you have administered Windows Server, then Hyper-V will be fairly intuitive, and all the components of System Center share a common user interface. Still, there is a lot to learn.
You should feel confident that your knowledge of VMware equates to many virtualization best practices and concepts that translate to Hyper-V and System Center functionality, and this book is a great step in becoming a master of Hyper-V and in aspects of System Center and Windows Azure. There are many other resources available, especially from Microsoft. Microsoft has made a big effort to help VMware administrators learn the Microsoft technologies and has even created some virtualization-specific certifications to help show your Microsoft virtualization skills.
Microsoft has an online training class in its virtual academy focused on Hyper-V and System Center (http://aka.ms/SvrVirt). At the time of this writing, once you complete the class, a free exam certification token is available to take exam 70-409, which gives you the Microsoft Certified Specialist: Server Virtualization with Hyper-V and System Center certification if you pass. There are also lots of other training classes available at the online academy, as well as great videos showing details of technologies. Microsoft also runs an annual event, TechEd; many of the sessions are now streamed live over the Internet and are available after the event for everyone to watch. http://channel9.msdn.com/Events/TechEd/ is a good starting point to find the TechEd recordings. This is a great way to get the latest information from the people at Microsoft who create the technologies.
To take certifications further, there are five exams that, once you've passed them all, give you the Microsoft Certified Solutions Expert: Private Cloud certification, detailed at www.microsoft.com/learning/en-us/private-cloud-certification.aspx. The Private Cloud certification tests all aspects of System Center, including monitoring and protection, which will require study beyond what this book covers. Classroom training is available for each of the exams.
Personally, I've never taken classroom training on these technologies; I prefer to learn by installing the products in a lab, trying different implementations, and solving problems and fixing things when they don't work (which is when you learn the most). The best way to learn Hyper-V and System Center and to convert your VMware skills is to get an environment running with Hyper-V and System Center and start implementing solutions that match what you are doing in VMware today. Look for new ways to do things; this chapter outlines some of those key “mappings” of technologies. For every technology you read about, such as those in this book, try it in your environment. Every action you perform using the graphical interfaces, try performing with PowerShell as well. If you use Hyper-V Manager, find where the same feature lives in SCVMM. This will give you the best understanding of the complete Microsoft solution.
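As an example of this practice, here is a sketch of how some common Hyper-V Manager actions map to the built-in Hyper-V PowerShell module. The VM name, memory sizes, paths, and switch name are hypothetical examples; adjust them for your lab environment.

```powershell
# Requires the Hyper-V PowerShell module on a Hyper-V host:
# Install-WindowsFeature Hyper-V-PowerShell

# Hyper-V Manager: New > Virtual Machine wizard
New-VM -Name "TestVM01" -MemoryStartupBytes 2GB -Generation 2 `
    -NewVHDPath "D:\VHDs\TestVM01.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "External"

# Hyper-V Manager: Settings > Processor / Memory
Set-VM -Name "TestVM01" -ProcessorCount 2 -DynamicMemory

# Hyper-V Manager: Start action
Start-VM -Name "TestVM01"

# Hyper-V Manager: main virtual machine list
Get-VM | Select-Object Name, State, CPUUsage, MemoryAssigned
```

Repeating each graphical action this way quickly builds the muscle memory you will need when automating at scale.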
Migrating from VMware to Hyper-V
I want to be very clear on this: I never tell people to migrate from VMware to Hyper-V without a reason. VMware ESXi is a great hypervisor, it has a proven track record, and it offers great functionality with vSphere vCenter. Hyper-V is also a great hypervisor. It's now in its fourth major version, and the fact that it powers some of the largest services in the world, such as Windows Azure, and is used by the largest companies in the world shows it too has a proven track record and great functionality that gets even better with System Center. There needs to be a reason to migrate; the following are reasons I commonly hear from customers:
· Money. You're already paying for Windows Server, so Hyper-V is effectively free.
· Your organization is using System Center and is largely Microsoft-based, so you want unified management.
· You want to use some piece of functionality that would be an additional license if using VMware.
· You're using Windows Azure and want compatibility between on-premises and off-premises services.
It's important for organizations to minimize transition risk when performing a virtualization migration. While the end state of running on Hyper-V may be trusted, the actual process of migrating virtual machines, with critical services spread over multiple hypervisors for a period, can be concerning to IT organizations. The best way to manage this risk is careful planning, with fallback processes in place and a good testing plan to ensure there are no surprises.
When planning migrations, migrate the least important systems first, fine-tune the migration process, iron out any resource access problems, and perfect the processes to identify all dependent resources before moving on to the more important and visible systems.
Make sure the IT team, the help desk, and the support staff are properly trained in the new environment, and update any processes as systems are migrated. Are changes in the DR process required? Has the backup and restore process changed? Are the monitoring systems monitoring the correct systems? How are the virtual machines provisioned? Are the Hyper-V hosts part of a patch process?
Unless your organization is in the habit of performing this type of migration, most likely the best approach is to hire an outside consulting organization to assist for at least the discovery, planning, and pilot, if not for the entire migration project. Using a consulting organization that does this type of migration every day will help you avoid the inevitable mistakes that will occur when trying this for the first time yourself.
The actual conversion of a VMware virtual machine, primarily converting its VMDK to a VHDX usable by Hyper-V, is a well-understood process, and a number of solutions are available, the foremost being the Microsoft Virtual Machine Converter Solution Accelerator (MVMC), available at www.microsoft.com/en-us/download/details.aspx?id=34591. I discussed this and other solutions in Chapter 5. For a slightly tongue-in-cheek view of a VMware to Hyper-V migration using the MAT and a NetApp storage array to expedite the VMDK to VHD conversion, see http://aka.ms/mat4shift. NetApp has a native VMDK to VHD conversion that rewrites only the header of the virtual hard disk, resulting in conversions that take seconds instead of minutes. The solution outlined in the video uses a temporary NetApp storage array at the organization just for the conversion process. The steps are as follows:
1. Use Storage vMotion to move the virtual machine storage to the NetApp storage array, which enables the virtual machine to keep running while being moved to the NetApp array.
2. Shut down the virtual machine in VMware.
3. Perform the conversion using MVMC.
4. Start the converted virtual machine on a Hyper-V host connected to the same NetApp storage array. This keeps the downtime for the virtual machine minimal.
5. Use Storage Live Migration on Hyper-V to move the running virtual machine to its permanent storage location.
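The Hyper-V side of steps 4 and 5 above can be sketched with the standard Hyper-V cmdlets; the VM name and destination path here are hypothetical, and the sketch assumes the converted VM has already been registered on the Hyper-V host with its VHDX on the temporary NetApp-backed storage.

```powershell
# Step 4: start the converted virtual machine on Hyper-V.
Start-VM -Name "MigratedVM01"

# Step 5: Storage Live Migration moves the running VM's virtual hard
# disks and configuration to their permanent location with no downtime.
Move-VMStorage -VMName "MigratedVM01" `
    -DestinationStoragePath "C:\ClusterStorage\Volume1\MigratedVM01"
```

Because the storage move happens while the VM is running, the only user-visible downtime in the whole process is the window between steps 2 and 4.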
Microsoft has announced some exciting enhancements coming to MVMC, which I've summarized here; the details are accurate at the time of this writing.
1. MVMC 2.0: Planned Release—Spring 2014 The main focus for this release is support for various distributions of Linux, Azure VM conversion, and PowerShell support.
2. Highlights:
· On-Premises VM to Azure VM conversion
· PowerShell interface for scripting and automation support
· Added support for vCenter & ESX(i) 4.1, 5.0, and now 5.5
· VMware virtual hardware version 4–10 support
· Linux Guest OS support including CentOS, Debian, Oracle, Red Hat Enterprise, SuSE Linux Enterprise, and Ubuntu.
3. MVMC 3.0: Planned Release—Fall 2014 For the V3 release, the focus is on physical-to-virtual (P2V) conversion as well as efficiency improvements.
4. Highlights:
· Physical to virtual (Hyper-V) machine conversion (supported versions of Windows)
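To give a feel for the announced MVMC PowerShell interface, here is a hedged sketch of a disk conversion. The cmdlet and parameter names match what shipped in the released MVMC 2.0 module, but treat them as illustrative; the install path and the source and destination paths are examples only.

```powershell
# Load the MVMC cmdlets from the default install location (example path).
Import-Module "C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1"

# Convert a single VMDK to a dynamically expanding VHDX.
ConvertTo-MvmcVirtualHardDisk `
    -SourceLiteralPath "D:\Export\AppServer01.vmdk" `
    -DestinationLiteralPath "D:\Converted\AppServer01.vhdx" `
    -VhdType DynamicHardDisk `
    -VhdFormat Vhdx

# Scripting support is what enables bulk conversion scenarios:
Get-ChildItem "D:\Export" -Filter *.vmdk | ForEach-Object {
    ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath $_.FullName `
        -DestinationLiteralPath ("D:\Converted\" + $_.BaseName + ".vhdx") `
        -VhdType DynamicHardDisk -VhdFormat Vhdx
}
```

It is this scriptability that tools such as MAT build on to drive conversions of many virtual machines in sequence.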
Take the time to fully understand your virtualization requirements, equate these to Hyper-V and System Center configurations, ensure everyone is trained accordingly, and make sure the processes are updated. Create detailed migration plans with fallback procedures, testing processes, and success criteria. With the proper preparation and planning, the migration can be smooth and invisible to the users.
The Bottom Line
1. Understand how System Center compares to vSphere solutions. System Center comprises a number of components that each provide different functionalities. vSphere has a similar set of solutions, and a basic mapping is shown here:
· SC Virtual Machine Manager = vCenter Server
· SC Data Protection Manager = vSphere Data Protection Advanced
· SC Operations Manager = vCenter Operations Management Suite
· SC Orchestrator = vCenter Orchestrator
· SC Service Manager = vCloud Automation Center
· App Controller and SC Service Manager = vCloud Director
The levels of functionality are not identical, but the list shows how the key functional areas of the products align.
Master It How is System Center licensed?
2. Convert a VMware virtual machine to Hyper-V. There are two aspects to converting a virtual machine; there is the virtual machine configuration, such as the number of vCPUs, memory, network connectivity, and so on, and then there are the virtual hard disks that contain the operating system and data. There are various solutions to address both parts of the conversion. The primary Microsoft VMware to Hyper-V tool is Microsoft Virtual Machine Converter, which performs migrations in an interactive fashion. Command-line tools are also available.
Master It How can MVMC be used as part of a bulk conversion process?
Master It How does NetApp help with VMDK to VHD conversions?