
Chapter 12. Bringing It All Together with a Best-of-Breed Cloud Solution

A large number of technologies have been covered in this book so far—on-premises technologies and those that are available through the public cloud. It can seem daunting to know which technology to use in different scenarios. This chapter will look at all the technologies and provide some guidelines for when to use them, and you'll also see what other companies are doing.

In this chapter, you will learn to

· Identify the overall best architecture for your organization

Which Is the Right Technology to Choose?

The most important step in choosing the right technology is to have a direction for your organization. Does your organization want to focus on the public cloud first and have a minimal on-premises infrastructure? Is it investing in brand-new datacenters and servers and looking to maximize that investment by focusing on its on-premises infrastructure? Or does it want to achieve best of breed with a hybrid approach?

Having a direction is important, but it's also critical to know your limits. By this I mean what can your organization realistically implement and support with its given resources, which include budget. It's doubtful that a 50-person company with a single “IT guy” could operate a complete on-premises infrastructure and have the facilities for a disaster recovery location. At this point, the company has to evaluate where its IT resources should be focused and, for some IT services, look at external solutions. A great example I see a lot in the industry, even for the very largest organizations, is using Software as a Service (SaaS) for email and collaboration, such as the Microsoft Office 365 service. Messaging can be complex. It's considered a tier-1 application for many companies, which means it must always be available, and so rather than try to architect what can become complex messaging solutions, it's easier for organizations to effectively outsource this.

Take time to understand the types of IT services your organization needs to provide. Understand the data retention, backup, and compliance requirements and regulations for your business. Understand the criticality of each system and the services it depends upon. Then look at your resources and what you can support, because this will certainly help guide your direction. You may have services on premises today that would be a good fit to move to the cloud because contracts are ending, hardware could be reused for other projects, and so on. Many organizations today spend huge amounts of time, effort, and expense on applications and services that consume far more resources than they should, especially in proportion to their benefit to the organization.

Consider the Public Cloud

Looking at the public cloud services available, if I'm a new company I would be thinking about using them where possible. Email, collaboration, application platforms, and customer relationship management—these are all critical areas that require large initial investments to get running. Using a public cloud solution such as Software as a Service (SaaS) or Platform as a Service (PaaS) allows you to “pay as you go,” which means you are paying a small amount when your company starts and the amount you pay grows as your company grows. That's perfect for a growing startup.

If I'm an established company and I'm looking at ways to cut down on my IT expenses or diversify them, moving some services off premises and to the cloud may make a lot of sense, particularly if I'm looking for new features or site disaster recovery capabilities. Using a cloud service like Office 365 instantly gives an organization enterprise email, communication, and collaboration resources that are replicated across multiple sites with a per-user, per-month fee structure. When I talk to organizations, I hear more and more the desire to move from capital expenditure (cap ex) to operational expenditure (op ex), and using a pay-as-you-go public cloud service removes the cap ex part almost completely. Keep in mind that moving services to the public cloud is not “free.” Most likely you will need help from a consulting organization to enable a smooth migration process because there will likely be a period of time when you have a hybrid solution, such as for email (some mailboxes may be on premises while others are in the cloud). Some services will remain hybrid services. For example, I've seen some organizations that host Exchange mailboxes on premises for office-based workers but use Office 365 for other workers, such as those in retail locations that have a higher turnover or can function with a lower quality of service. I've also seen the opposite, where the most important people in a company have their email hosted on Office 365 to ensure its availability while everyone else stays on premises.

If a new application is needed for the short term, or if high availability and growth potential are requirements, hosting it on Windows Azure would be a great choice. Development scenarios are also a great fit because they have high turnover, with environments constantly being created and deleted, and without a private cloud on premises, that can result in a lot of work for administrators.

Familiarize yourself with the public cloud solutions available and use them in the right way. Use them where it makes sense, but don't use them just to sidestep internal provisioning processes or their shortcomings. Some organizations I have worked with took six weeks to provision a new virtual machine. Because of this long delay, business units decided to just use the public cloud instead. That is a poor reason to use the public cloud. Fix the internal process using capabilities such as self-service and the private cloud, which I've talked about in detail in earlier chapters.

Moving services to the public cloud has additional advantages. Typically, those solutions will ensure the availability of the service and perform the backups. It's also easy to scale up and down because you pay for what you use. Also consider that many services are consumed from anywhere: I check my email at home, on my phone, and on a laptop at a restaurant, which means my company would have to provide Internet-based access to at least a portion of the corporate infrastructure if the service were housed on premises. By leveraging a public cloud solution, the organization does not have to provide that access. The service is already being offered on the Internet.

If you are creating a new custom application, consider whether it is a fit for a Platform as a Service (PaaS) solution such as Windows Azure. Something like this will minimize the ongoing IT overhead required because the only work to do is to maintain the application. For other applications and workloads that you want to run in the public cloud using Infrastructure as a Service (IaaS), most should work without modification, which is a key principle of IaaS.
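As a rough illustration, here is what standing up a stand-alone IaaS virtual machine looks like with the Windows Azure PowerShell (Service Management) module available at the time of this writing. The subscription, cloud service, image filter, credentials, and location are placeholder values, so treat this as a sketch rather than a production script.

# A minimal sketch using the Windows Azure PowerShell (Service Management) module;
# all names and the password below are placeholders for illustration only.
Import-Module Azure
Select-AzureSubscription -SubscriptionName "Contoso-Dev"

# Create a stand-alone IaaS VM for a development workload
New-AzureQuickVM -Windows `
    -ServiceName "contoso-dev-svc" `
    -Name "DEVWEB01" `
    -ImageName (Get-AzureVMImage |
        Where-Object { $_.Label -like "Windows Server 2012 R2*" } |
        Select-Object -First 1).ImageName `
    -AdminUsername "localadmin" `
    -Password "P@ssw0rd!" `
    -Location "East US" `
    -InstanceSize "Small"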

If your organization uses Windows Azure for just development, testing, and stand-alone projects, then no communication may be required between Windows Azure and your on-premises network beyond the standard Internet-based communications using end points defined on the cloud service's virtual IP. What is more common is that seamless, cross-premises connectivity between Windows Azure and the on-premises network is required. To enable that connectivity, you need to configure the Windows Azure VPN gateway functionality. On the on-premises side, either a hardware gateway or a software gateway, such as Windows Server Routing and Remote Access Service (RRAS), can be used.
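As a rough sketch, the Windows Azure side of the gateway looks something like the following with the Windows Azure PowerShell module, assuming the virtual network and the local (on-premises) network site described in the next paragraph have already been defined. The virtual network and site names are placeholders.

# A hedged sketch; "ContosoVNet" and "ContosoHQ" are placeholder names for a
# virtual network and a local network site already defined in the subscription.
New-AzureVNetGateway -VNetName "ContosoVNet" -GatewayType DynamicRouting

# Retrieve the gateway's public IP and the shared key needed to configure the
# on-premises VPN device (hardware appliance or RRAS)
Get-AzureVNetGateway -VNetName "ContosoVNet" | Select-Object VIPAddress, State
Get-AzureVNetGatewayKey -VNetName "ContosoVNet" -LocalNetworkSiteName "ContosoHQ"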

Before implementing the Windows Azure gateway, you need to have created a virtual network with subnets defined that will be used by virtual machines created after you create the virtual network. It is important to use an IP scheme for the virtual network in Windows Azure that does not conflict with any on-premises IP allocation. When you use a different IP address range in Windows Azure, IP traffic can be routed cross premises. If the on-premises gateway device that is used to connect to Windows Azure is not the default gateway for on-premises traffic, you will need to add manual routes so on-premises traffic that is destined for Windows Azure routes correctly. Also make sure all on-premises IP scopes are defined within Windows Azure correctly to ensure correct routing of traffic from Windows Azure to the on-premises network.
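For example, on a Windows Server host that does not use the VPN device as its default gateway, a manual route might look like the following; the Windows Azure address range, interface alias, and gateway address are assumed values for illustration.

# A hedged example: send traffic for the Windows Azure virtual network
# (10.10.0.0/16 here, an assumed range) to the on-premises VPN/RRAS gateway
# (192.168.1.254 here, an assumed address) on a Windows Server host.
New-NetRoute -DestinationPrefix "10.10.0.0/16" `
             -InterfaceAlias "Ethernet" `
             -NextHop "192.168.1.254"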

At the time of this writing, only one VPN connection can be configured from Windows Azure to an on-premises location, so the primary datacenter should be used to connect to Windows Azure. Once Windows Azure supports multiple gateways, it will be advisable to establish at least one additional VPN connection to another site to provide redundancy from site failure in addition to more efficient routing of traffic.

Once network connectivity is established cross premises, most likely some operating system instances running in Windows Azure will need to be domain joined. This introduces a number of considerations. One requirement is name resolution via DNS. Initially, configure the virtual network in Windows Azure to use on-premises DNS servers for name resolution, which will allow machines to locate domain controllers and join the domain. Using a shared DNS infrastructure between on-premises servers and Windows Azure will also allow cross-premises name resolution.
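Once name resolution works, the join itself is nothing special. A minimal sketch, run inside the Windows Azure virtual machine and using placeholder domain and account names:

# A minimal sketch, assuming the Windows Azure virtual network already points at
# on-premises DNS servers so the domain can be located; names are placeholders.
Add-Computer -DomainName "contoso.local" `
             -Credential (Get-Credential CONTOSO\svc-join) `
             -Restart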

Within Active Directory, create a separate Active Directory site for the IP ranges used in Windows Azure and create a site link from that site to the on-premises location that has the connectivity. Make sure to set the site link cost and replication interval to values that meet your requirements; the default replication interval of every 3 hours is likely not fast enough.
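A hedged sketch of that site configuration using the Active Directory PowerShell module; the site, subnet, link cost, and interval values are placeholders to adjust for your environment.

# Placeholder names and values for illustration only
New-ADReplicationSite -Name "Azure-EastUS"
New-ADReplicationSubnet -Name "10.10.0.0/16" -Site "Azure-EastUS"
New-ADReplicationSiteLink -Name "HQ-Azure" `
    -SitesIncluded "Default-First-Site-Name","Azure-EastUS" `
    -Cost 200 `
    -ReplicationFrequencyInMinutes 15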

The next decision is whether Active Directory domain controllers should be placed in Windows Azure. Initially, many organizations have security concerns about placing a domain controller in Windows Azure for fear of directory or security compromise, which would potentially expose the entire contents of the directory service. As previously discussed, the Microsoft datacenters likely have far more security than any normal company could hope for. When configuring the domain controller in Windows Azure, take care that end points that aren't required are not exposed and that firewall services and monitoring are in place. These are the same steps you would take for an on-premises domain controller, but you need to be aware of any end points defined for the virtual machine that are directly accessible on the Internet. Most likely, the domain controller would also be a global catalog, or at least one of them would be if you place multiple domain controllers in Windows Azure. For a small number of domain-joined machines in Windows Azure, the authentication traffic and other directory services data could be serviced by the on-premises domain controllers across the VPN gateway, but as the number of domain-joined Windows Azure resources grows, it will become necessary to have a local domain controller.
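Promoting that local domain controller is the same process as on premises. A minimal sketch using the ADDSDeployment cmdlets, with domain, site, and path values as placeholders; placing the database, logs, and SYSVOL on a data disk with host caching disabled is the common guidance for domain controllers in Windows Azure.

# Run inside the Windows Azure IaaS VM; names, paths, and credentials are placeholders
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSDomainController -DomainName "contoso.local" `
    -SiteName "Azure-EastUS" `
    -InstallDns `
    -DatabasePath "F:\NTDS" -LogPath "F:\NTDS" -SysvolPath "F:\SYSVOL" `
    -Credential (Get-Credential CONTOSO\Administrator)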

Companies often consider using a read-only domain controller (RODC) in Windows Azure because an RODC caches passwords for only a subset of users and cannot make changes, which minimizes the damage if the RODC is compromised. The decision depends on which services are running in Windows Azure and whether they work with an RODC. If a service does not work with RODCs, then there is no point in placing an RODC in Windows Azure; you will need a regular domain controller or will need to accept that the Active Directory traffic will traverse cross premises. Another option is to create a child domain for Windows Azure.

Once a domain controller is running in Windows Azure and is configured as a DNS server, the virtual network can be modified to use the domain controller(s) in Windows Azure as the primary DNS server. Remember not to deprovision the domain controller(s), because doing so could result in an IP address change. However, using a small, separate subnet just for domain controllers can help alleviate this problem by reducing the possible IP addresses that can be allocated to the domain controllers and stopping other VMs from using those IP addresses.

With cross-premises connectivity and Active Directory services in place, you can really open up the services that can be placed in Windows Azure. I see many organizations using a hybrid approach. Often they start with testing and development in Windows Azure, and once the technology is proven and trusted, its use is expanded. Remember to constantly look at what new capabilities are available; while initially you could, for example, deploy an IaaS VM running SQL Server databases, over time it may become possible to move those databases to SQL Azure instead, reducing your management overhead.

An interesting use case I have seen is to use Windows Azure as the disaster recovery site. At the time of this writing, Windows Azure cannot be the target for Hyper-V Replica, which means you cannot replicate virtual machines to Windows Azure at the VM level. Instead, you need to look at each service and how to replicate it. Here are some approaches. Keep in mind that there is not one right answer; it will depend on the workload.

· For Active Directory, deploy domain controllers to Windows Azure and use Active Directory multimaster replication to keep the Windows Azure domain controllers up-to-date.

· For file data, one option would be to use Distributed File System Replication (DFS-R) to replicate data to a file server running in Windows Azure IaaS; a sketch of this setup appears after this list. Distributed File System Namespaces (DFS-N) could be used to give users transparent access to the data. Another option is to use StorSimple, which also stores data in Windows Azure. However, at the time of this writing, there is not a virtual StorSimple appliance that would give access to the data stored in Windows Azure from a VM running in Windows Azure, though this is expected to change. Another option would be to periodically copy data using Robocopy or PowerShell.

· SQL databases can be replicated using SQL Server 2012 AlwaysOn availability groups, which should be used in asynchronous-commit mode. This will require stretching a cluster between your premises and Windows Azure, which I discuss at the following location: http://windowsitpro.com/hybrid-cloud/extend-failover-cluster-windows-azure

· SharePoint content is mainly SQL Server data, so deploy SharePoint instances in Windows Azure and use SQL Server AlwaysOn to replicate the SharePoint databases. For data not stored in SQL Server, use another process to replicate file system content and configuration periodically or as part of a change control process.

· Exchange and Lync are not supported running in IaaS. If you need offsite disaster recovery capabilities for Exchange and Lync, the best solution is to migrate users to Office 365. This type of migration will likely be a major undertaking, and you will run in a hybrid mode during the migration.

· Other applications will need to use a combination of technologies. If the application uses a SQL database, use SQL replication to replicate the database. Use file system replication to replicate other file system assets.

· For replication of anything running in an operating system, one third-party solution I found is Double-Take, which you can find at the following location:

www.visionsolutions.com/products/Virtual-Server-Protection.aspx

It provides replication from within the OS to another OS instance. In the future, if Hyper-V Replica adds support for replicating to Windows Azure, that would be another option.
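Returning to the file data option mentioned in the list, here is a hedged sketch of the DFS-R approach using the DFSR PowerShell module in Windows Server 2012 R2. The replication group, folder, server, and path names are placeholders.

# Replicate an on-premises file server (FS01) to a file server running in
# Windows Azure IaaS (AZFS01); all names and paths are placeholders.
New-DfsReplicationGroup -GroupName "Contoso-DR"
New-DfsReplicatedFolder -GroupName "Contoso-DR" -FolderName "CorpData"
Add-DfsrMember -GroupName "Contoso-DR" -ComputerName "FS01","AZFS01"
Add-DfsrConnection -GroupName "Contoso-DR" `
    -SourceComputerName "FS01" -DestinationComputerName "AZFS01"
Set-DfsrMembership -GroupName "Contoso-DR" -FolderName "CorpData" `
    -ComputerName "FS01" -ContentPath "D:\CorpData" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "Contoso-DR" -FolderName "CorpData" `
    -ComputerName "AZFS01" -ContentPath "F:\CorpData" -Force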

To ensure mobility between the on-premises infrastructure and Windows Azure, make sure that those workloads that need to be transportable use only features common to both environments, such as the following (a configuration sketch follows the list):

· Generation 1 virtual machines

· VHD disk format of 1023 GB maximum size

· One network adapter

· No requirement on IPv6 communications
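A minimal sketch of creating a Hyper-V virtual machine that stays within those constraints; the names, paths, and sizes are placeholders.

# Generation 1 VM, .vhd format (under the 1023 GB limit), single network adapter;
# all names and paths are placeholders for illustration.
New-VHD -Path "D:\VMs\PortableVM\os.vhd" -SizeBytes 127GB -Dynamic
New-VM -Name "PortableVM" -Generation 1 `
       -MemoryStartupBytes 4GB `
       -VHDPath "D:\VMs\PortableVM\os.vhd" `
       -SwitchName "External"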

There is also an interesting licensing consideration for placing workloads in Windows Azure. Your organization may already have a large number of Windows Server licenses, but they are not required when using Windows Azure because the Windows Server license is included. It may be possible to repurpose licenses for other on-premises workloads. Your organization may have multiyear agreements for licenses, in which case you may be able to negotiate converting elements of the agreement to cloud-based services.

Ultimately, the public cloud offers many capabilities. Your organization should look at each one and decide if it is a good fit for some workloads. Then deploy in a carefully planned manner to maintain service availability and supportability.

Decide If a Server Workload Should Be Virtualized

While the public cloud is great, there will be many workloads that you want to keep internally on your company's systems. As you read this, your company probably has some kind of server virtualization. It could be VMware ESX, it could be Microsoft Hyper-V, it could be Citrix XenServer, or it could be something else, and likely your organization is using multiple hypervisors. The most common scenario I see is ESX organizations evaluating Hyper-V so they have both in their datacenter.

The default for most organizations is virtual first for any new server workload except for servers with very high resource requirements and some specialty services, such as domain controllers that provide the Active Directory domain services for the environment. (Typically, though, only one domain controller is a physical server while all others are virtualized.)

Most of these exceptions are based on limitations of the previous generation of hypervisors.

The reality is that Windows Server 2012 R2 Hyper-V can run very large virtual machines: 64 vCPUs with the NUMA topology projected into the VM, 1 TB of memory, direct access to network cards using SR-IOV if needed, 64 TB VHDX virtual storage, shared VHDX, and access to both iSCSI and Fibre Channel storage where necessary. There really are very few workloads that cannot now run in a virtual machine and perform the same as on bare metal, including high-resource workloads such as SQL Server; a configuration sketch follows the list below. Even if you had a physical server that hosted only one virtual machine because that VM needed all the resources, virtualizing is still a good idea because all the other benefits of virtualization would still apply:

· Abstraction from the underlying hardware, giving complete portability

· Ability to move the VM between physical hosts for hardware maintenance purposes

· Leveraging the high availability and replica features of Hyper-V where needed

· Consistent deployment and provisioning
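To give a feel for the scale available, here is a hedged sketch of configuring a large virtual machine on Windows Server 2012 R2 Hyper-V; the VM name, values, and paths are placeholders rather than recommendations.

# Placeholder VM name, sizes, and paths; the VM must be off to change these values
Set-VMProcessor -VMName "SQL01" -Count 64
Set-VMMemory    -VMName "SQL01" -StartupBytes 1TB
New-VHD -Path "C:\ClusterStorage\Volume1\SQL01-data.vhdx" -SizeBytes 64TB -Dynamic
Add-VMHardDiskDrive -VMName "SQL01" -Path "C:\ClusterStorage\Volume1\SQL01-data.vhdx"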

There may still be some applications you cannot virtualize, either because they need more than the resource limits of a virtual machine or, more likely, because of supportability. Some application vendors will not support their applications running in a virtualized manner, sometimes because they have not had time to test it, or because the vendor has its own virtualization solution and so supports its product only on its own hypervisor. For example, Oracle supported its products only on its own Oracle VM hypervisor, but this changed in 2013, and Oracle now supports its products on Hyper-V and Windows Azure. Prior to this shift in support, organizations had to make a decision on how to proceed.

Remember, applications don't really know they are running on a virtual operating system. To the application, the operating system looks exactly the same as if it were running on bare metal, except that certain types of devices, such as network and storage devices, will be different because they are virtual devices, so virtualizing an application should not introduce problems with today's hypervisors. Carrying on with the Oracle example, in my experience, even before the supportability update, the Oracle products worked just fine on Hyper-V, and Oracle support would even try to assist on a best-efforts basis if there was a problem while running on a non-Oracle hypervisor. However, organizations have to be prepared: if a problem cannot be fixed, the application vendor may ask for the problem to be reproduced on a supported configuration, such as on a bare-metal system without virtualization or on a supported hypervisor.

Technology can help here. There are third-party solutions that normally help with physical-to-virtual conversions when organizations want to move to a virtual environment and that can also take a virtual machine and deploy it to bare metal. This could be an emergency backup option for organizations that want to standardize on one hypervisor and run all applications on virtual operating systems even when not officially supported.

It really comes down to an organization's appetite for some risk, however small, and how critical the application is should it hit a problem. If you have a noncritical application, then virtualizing in a nonsupported configuration that has been well tested by the organization is probably OK. If it's a critical system that would need instant support by the vendor if there was a problem, then running in an officially unsupported configuration is probably not the best option.

In the past, there were concerns about virtualizing domain controllers. That is not the case with Windows Server 2012 and Windows Server 2012 Hyper-V, which have special capabilities directly related to Active Directory, such as VM-GenerationID, as covered in Chapter 6, “Maintaining a Hyper-V Environment.” Most companies I work with today virtualize domain controllers, and Windows Server 2012 failover clustering even removes the risk of the cluster not being able to start if a domain controller is not available, which was a previous concern. Essentially, prior to Windows Server 2012, if all the domain controllers were running on a cluster, there was a problem if you shut down the cluster: virtual machines normally cannot start until the cluster service starts, and the cluster service could not start without contacting a domain controller. Therefore, if every domain controller was a virtual machine on that cluster, nothing could start. Windows Server 2012 failover clustering removed this dependency.

I've focused on Windows workloads and how Windows can be virtualized, but many organizations have some non-Windows servers as well. Hyper-V has great support for a number of Linux distributions, and even Linux distributions that are not officially supported will likely still work and can use the Hyper-V integration services to give you a great experience. The same applies to Windows Azure, which has a wide range of Linux support. Just because a workload is not a Windows Server workload does not mean it cannot be virtualized. There are, however, some Linux/Unix workloads that cannot be virtualized on any x86 hypervisor because they use a non-x86 architecture; a good example is Solaris running on SPARC, which is a different hardware architecture. If you are using the x86 version of Solaris, it would probably run on Hyper-V. However, at the time of this writing, it's not a supported guest operating system for Hyper-V, and if you are running a Solaris workload, it's probably pretty important, so running it in a nonsupported manner may not make sense for you.

When you are using clustering within virtualized environments that require shared storage, there are a number of options. Where possible, use Shared VHDX because this maintains complete virtualization of the storage and removes direct storage fabric visibility from the virtual machines. If Shared VHDX is not an option (if you're not running Windows Server 2012 R2 Hyper-V or you have a mixed cluster of virtual and nonvirtual operating systems), then virtual Fibre Channel or iSCSI can be used, and perhaps even an SMB 3 file share if the service supports it.
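A minimal sketch of the Shared VHDX option for a two-node guest cluster; the VM names, path, and size are placeholders, and the VHDX must reside on a Cluster Shared Volume or an SMB 3 share.

# Attach the same VHDX to both guest-cluster nodes with sharing enabled
# (Windows Server 2012 R2 Hyper-V); names and path are placeholders.
$shared = "C:\ClusterStorage\Volume1\GuestCluster\shared-data.vhdx"
New-VHD -Path $shared -SizeBytes 500GB -Dynamic
Add-VMHardDiskDrive -VMName "NODE1" -Path $shared -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "NODE2" -Path $shared -SupportPersistentReservations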

Remember that just because Hyper-V has a great replication technology in Hyper-V Replica, it should not be the first choice. It is always better to use an application-/service-aware replication technology such as Active Directory replication, SQL AlwaysOn, Exchange Database Availability Groups, and so on. Only if there is no native replication solution should Hyper-V Replica be used. Also remember that replication is not a replacement for backups.
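Where Hyper-V Replica is the right fit, enabling it is straightforward. A hedged sketch with placeholder VM and server names, assuming the replica server has already been configured to accept replication:

# Placeholder VM and server names; the replica server must already allow
# inbound replication over the chosen port and authentication type.
Enable-VMReplication -VMName "LOBAPP01" `
    -ReplicaServerName "hv-dr01.contoso.local" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "LOBAPP01"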

Do I Want a Private Cloud?

I talk to many customers about the private cloud, and some are open to it and some just hate the idea. This is largely because of a misunderstanding about what the private cloud has to mean to the organization. Instead of asking if they want to use a private cloud, I could ask the following questions and get very different responses:

· Do you want easier management and deployment?

· Do you want better insight into networking and storage?

· Do you want to abstract deployment processes from the underlying fabric, enabling deployments to any datacenter without worrying about all the underlying details like which SAN, VLAN, IP subnet, and so on?

· Do you want to better track usage and even implement showback and chargeback to business units?

· Do you want to be able to deploy multitiered services with a single click instead of focusing on every virtual machine that is needed?

· Do you want to simplify the process of creating new virtual environments?

I would get yes answers from pretty much everyone. And I could take it a step further by asking, “Do you want to enable users to request their own virtual environments or service instances through a self-service portal with full-approval workflow within quotas you define that are automatically enforced, including virtual machine automatic expiration if required?”

I may start to get some head-shaking on this one. IT teams can have concerns about letting users have self-service portals, even with quotas, approval workflows, and full tracking. That's OK. As with using public cloud services, when implementing end-user self-service solutions it can take some time for IT to trust the controls and processes and see that they won't result in VM sprawl and a wild west of uncontrolled VM mayhem. In reality, with the private cloud there will be better tracking and more controls than with the processes used in most organizations today.

The key point is that adopting a private cloud brings only benefits to IT departments and the organization as a whole: far greater utilization of the resources the company already has, better insight into those resources, much better responsiveness to the requirements of the business (such as provisioning new environments), and the ability for everyone to really focus on what they care about, the application.

Go back to those first questions I asked. If your answers to any of those are yes, then a move to the private cloud model makes sense, and remember that you don't have to expose all of its capabilities to end users. You can have self-service capabilities but let only the IT teams use them to better enable provisioning processes. It's still helping the environment.

Remember that the private cloud provides a foundation on which you can offer many types of services. You can offer basic virtual machines as an in-house Infrastructure as a Service. You can offer environments with certain runtime environments like .NET or J2EE to enable Platform as a Service, where business units can easily run their applications. You can even have complete services that model an entire multitiered application through service templates, thus offering Software as a Service. It's really whatever makes sense for your organization. Typically, organizations will start with basic Infrastructure as a Service, offering virtual machines, and then build up from that point as confidence and experience grow.

My recommendation is to get your organization on the latest version of Hyper-V. The new capabilities really make it the best virtualization platform out there. It adds support for far larger virtual machines and larger clusters. It has better replication and availability features and better support for direct access to network hardware and network virtualization. It has full PowerShell management and guest-level Fibre Channel access, which means more workloads can be virtualized and therefore you can have a simpler datacenter. And that's just scratching the surface.

It probably seems daunting. There seems to be a lot of change going on, and if you are currently struggling to keep things running, either not patching servers or patching them manually, and always installing servers by running around with a DVD or ISO, this will seem like a huge difference, but it's a good difference. There is a large time investment to initially get these processes and solutions in place, so some organizations may need to bite the bullet and get a consulting company in to help them get running. If that's the case with your company, make sure the consultants don't work in isolation. Work with them, and be part of the decisions and planning; that way, when they leave, you understand why things were done as they were and can carry on any best practices that were implemented.

Enabling Single Pane of Glass Management

Virtualization does not have to change the way you manage your datacenter. It would be possible to carry on managing each operating system instance, deploying each instance by booting to an ISO, but you really are not getting the most from the technologies available and are making life far harder than it needs to be.

One of the biggest changes that virtualization introduces to the datacenter initially is how you provision new servers. Instead of installing operating systems via an ISO file, use virtual machine templates that can include customizations, join a domain automatically, and install applications and run scripts. Most likely you will have a few virtual hard disks that can be used by many different templates that can be tailored for the exact needs of the organization.
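A hedged sketch of template-based provisioning using the Virtual Machine Manager PowerShell module; the template, host, and virtual machine names are placeholders, and your template would encapsulate the customization, domain join, and application installation steps just mentioned.

# Deploy a new VM from an existing VMM template; all names are placeholders.
$template = Get-SCVMTemplate -All | Where-Object { $_.Name -eq "WS2012R2-WebTier" }
$vmConfig = New-SCVMConfiguration -VMTemplate $template -Name "WEB01"
$vmHost   = Get-SCVMHost -ComputerName "HV01.contoso.local"
Set-SCVMConfiguration -VMConfiguration $vmConfig -VMHost $vmHost
New-SCVirtualMachine -Name "WEB01" -VMConfiguration $vmConfig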

The longer-term goal is to shift from creating virtual machines to deploying instances of services that are made up of many virtual machines and service templates. Using service templates is a big change in how services and virtual machines are provisioned. The benefits they bring—such as easy deployment, updating of deployed instances by updating the template, server application virtualization, and automated configuration of network hardware—really make the use of service templates something that should be a goal. This is not to say normal virtual machines will never be deployed. Service templates are great to enable the deployment of services within an organization, but there will always be those applications that just need to be deployed once, and often the additional work in creating a service template does not make sense.

What is important, though, is that you don't end up with two completely different management solutions or even potentially more:

· One management solution for virtual machines

· One management solution for physical machines

· One management solution for the virtual infrastructure such as the hypervisor

· One management solution for the public cloud resources such as Windows Azure IaaS virtual machines

The goal is to manage your environment as simply and with as few tools as possible. Look for management solutions that enable complete management without having to put in lots of point solutions for different aspects of your datacenter. Patching is a great example: there are a number of solutions that will patch just virtual machines, different solutions that patch hypervisors, and others for desktops. A solution such as System Center Configuration Manager (SCCM) 2012 R2 will provide patching for all servers, physical or virtual, and your desktops. Also, because Hyper-V is part of Windows, SCCM can patch the hypervisor itself: one solution to patch everything. SCCM can also integrate with many third parties to apply updates to hardware (such as firmware and BIOS) and to deploy updates for Microsoft and non-Microsoft applications.

The same idea applies to all aspects of your datacenter. Try to stay away from point solutions. System Center Operations Manager (SCOM) can monitor your physical servers, your virtual servers, the operating system, applications (including custom .NET and J2EE applications), networking equipment, and even non-Windows workloads, in addition to monitoring and integrating with Windows Azure. This gives a complete view, from soup to nuts as they say. The same applies for backup, service management, and orchestration: keep it simple and minimize the number of separate tools.

From a virtual machine management perspective, System Center App Controller provides a single view of on-premises virtual machines that are managed by System Center Virtual Machine Manager (SCVMM) and also virtual machines running in Windows Azure and even virtual machines running with hosters that leverage Service Provider Foundation (SPF). The same can apply to provisioning with more complex workflows using System Center Service Manager (SCSM) to provide a service catalog fronting many different services, including virtual machine provisioning and management.

Orchestration is where I would like to finish because it really brings together everything about the datacenter. As organizations use more and more IT, your organization will have more and more operating system instances to manage. Technologies like service templates help to bring the focus to the application instead of the operating system, but there will still be large numbers of operating systems that require management. To really scale, you must look at automation capabilities and at working with multiple operating system instances at the same time.

PowerShell really is a key part of enabling automation. Especially in Windows Server 2012 and above, basically everything that can be done with the GUI can be done with PowerShell. Actions can be scripted, but more important, they can be executed on many machines concurrently and in an automated fashion. Building on orchestrating tasks and beyond just PowerShell, take some time to look at System Center Orchestrator. Every client I talk to gets very excited about Orchestrator in terms of its ability to really connect to any system that exists through various methods and then to graphically create runbooks, which are sets of actions that should be performed in a sequence and based on results from previous actions across all those connected systems. Start with something small, some set of actions you perform manually each day, and automate them in Orchestrator. Another good way to learn is to take a simple PowerShell script you have and model it in Orchestrator instead. Once clients I've worked with start using Orchestrator, it becomes the greatest thing since sliced bread and gets used for everything. This does not mean you won't need PowerShell. I find that with Orchestrator many tasks can be 80 percent completed with the built-in activities, but then PowerShell is called from Orchestrator to complete the final 20 percent.
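As a simple illustration of acting on many operating system instances at once, PowerShell remoting can fan a task out concurrently; the server list file and the service being restarted are placeholders.

# Run the same action on many servers concurrently via PowerShell remoting;
# the list file and service name are placeholders for illustration.
$servers = Get-Content "C:\Scripts\AppServers.txt"
Invoke-Command -ComputerName $servers -ThrottleLimit 32 -ScriptBlock {
    # Restart the service on each target and return its resulting status;
    # remoting adds the source computer name to each result automatically
    Restart-Service -Name Spooler
    Get-Service -Name Spooler
}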

For organizations taking a hybrid approach, providing a seamless experience for the users of services is vital. System Center App Controller provides the seamless pane of glass, but it's key for the IT organization to own the process of deciding whether new virtual machines will be deployed on premises or in Windows Azure. I've had great success using System Center Orchestrator with runbooks that utilize PowerShell to receive a provisioning request made via System Center Service Manager. The decision of whether to deploy on premises or in Windows Azure is made by logic built into the Orchestrator runbook, based on the target use for the new environment, the available capacity on premises, the predicted length of usage of the environment, and the capabilities requested. Once the logic provisions the virtual machine either on premises or in Windows Azure, the requesting user receives an email with an RDP file to enable connectivity, or the new service is added to their list of services in App Controller. The point is that the provisioning process and ongoing interaction are the same no matter where the virtual machine is actually hosted.
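To make that concrete, here is a hypothetical placement function of the kind an Orchestrator runbook can call from a PowerShell activity. The thresholds, parameter names, and rules are illustrative assumptions, not part of any product.

# A hypothetical placement decision of the kind a runbook could implement;
# all thresholds and property names are assumptions for illustration.
function Select-DeploymentTarget {
    param(
        [string]$Purpose,            # e.g. 'Dev', 'Test', 'Production'
        [int]$ExpectedLifetimeDays,
        [int]$OnPremFreeMemoryGB
    )
    if ($Purpose -ne 'Production' -and $ExpectedLifetimeDays -le 90) {
        return 'WindowsAzure'        # short-lived dev/test goes to the public cloud
    }
    if ($OnPremFreeMemoryGB -lt 32) {
        return 'WindowsAzure'        # burst to the cloud when on-premises capacity is low
    }
    return 'OnPremises'
}

Select-DeploymentTarget -Purpose 'Dev' -ExpectedLifetimeDays 30 -OnPremFreeMemoryGB 256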

The Bottom Line

1. Identify the overall best architecture for your organization. As this chapter has shown, there are a lot of things to consider when choosing a cloud-based solution for an organization. It's important to take the time to understand the organization's strategic direction, its resources, and the needs of its workloads. Only then can an architecture be created that utilizes the strengths of the different options.

1. Master It What is the most important first step in deciding on the best architecture?