Load Balancing - Implementing NetScaler VPX (2014)


Chapter 3. Load Balancing

A load-balanced service is accessed by users every day, when they book a plane ticket on a website, watch the news, or access social media. With the use of load balancing, we have the ability to distribute user requests or client requests for content and applications across multiple backend servers where the content is located. In this chapter, we will cover the following topics:

· How load balancing works

· How to load balance a generic web application

· How to load balance Citrix services such as the XML service and DDC servers

· How to load balance Microsoft products such as SharePoint, Exchange, and MSSQL

A load-balanced service within NetScaler allows us to distribute user requests from different sources based upon different parameters and algorithms, such as least bandwidth or least connections. It also provides persistency, which allows us to maintain a user's session against the same server. These features allow us to direct each client to a suitable backend server, for example, the server with the fewest active connections.

A regular generic load-balanced service might look a bit like the one shown in the following figure. We have two backend web servers, which answer on port 80, and they are publicly accessible via a VIP address, which is the load-balanced service.

Load Balancing

So, in essence, a load-balanced service in NetScaler consists of the following:

· Servers: These are the backend servers that host a service.

· The IP address and server name are 10.0.0.3 and server1, respectively

· Service: Here, we define what service is hosted on the backend servers. We also define a monitor, which is used to check if the service is responding on the backend server.

· The service name is IIS1

· The IP address and server name are 10.0.0.3 and server1, respectively

· The protocol and port are HTTP and 80, respectively

· Defining an HTTP monitor allows the service object to check whether the server is responding to HTTP traffic

· Virtual server: Here, we define the IP, port, and protocol on which the load-balanced service will answer, what kind of service is attached in the backend, and how it will load balance between the different services at the back.

· vServer name: The vServer name is IIS. This field is purely descriptive.

· VIP address: This is the external address of the load-balanced service. Here, it is 80.80.80.80.

· Protocol and Port: This is the protocol and port on which this service should respond. Here, it is HTTP and 80, respectively.

· Services or Service Groups: This defines what backend services are going to be included in the load-balanced object. Here, they are IIS1 and IIS2.

· Load-balancing method: This defines what kind of load balancing method is chosen. Here, it is the least connection method.

If we have multiple backend servers hosting the same service, it is much more convenient to use service groups. This allows us to easily bind a service against multiple servers simultaneously.

Note

When starting the deployment of load-balanced services, we need to have the basic configuration in place, such as placement of SNIP to allow communication with backend servers. A quick rule of thumb is:

· Initial setup with NSIP

· Platform license in place

· SNIP defined for backend connectivity

· Backend servers added to server list

· Services bound to the backend servers

First, we need to enable the load balancing feature. This can be done either by right-clicking on the load balancing menu under Traffic Management in the GUI and clicking on Enable, or by using the following CLI command:

enable ns feature lb

Load balancing a generic web application

In order to deploy a load-balanced web application, we first need to have servers in place that respond to some sort of network service. In this example, we have two Internet Information Services (IIS) web servers running on Windows Server. These are accessible internally via the IPs 10.0.0.2 and 10.0.0.3, and they respond to HTTP traffic on port 80.

First, we need to add the IP addresses to the server list. This can be done by going to Traffic Management | Load Balancing | Servers, and clicking on Add. Here, we just enter the IP address of the backend servers and click on Create. We have to do this for every backend server. After that is done, we have to add a service to the servers. This can be done by going to Traffic Management | Load Balancing | Services, and clicking on Add. Here, we have many different options. First, we need to choose the server we entered earlier, choose a type of protocol, enter a port number, and give the service a name.

Load balancing a generic web application

Now, we add a monitor to the service, and click on Create. It will automatically start using the monitor to check the state of the backend server. If we open the service again, the monitor will show statistics about the response time and the status.
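The same setup can also be sketched from the CLI. The commands below assume the server and service names from the example in this chapter (server1/server2, IIS1/IIS2) and the built-in http monitor:

```
# Add the backend servers
add server server1 10.0.0.2
add server server2 10.0.0.3

# Create an HTTP service for each server
add service IIS1 server1 HTTP 80
add service IIS2 server2 HTTP 80

# Bind the built-in HTTP monitor to each service
bind service IIS1 -monitorName http
bind service IIS2 -monitorName http
```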

Note

Monitors are what make load balancing unique. Unlike simple round-robin solutions, NetScaler probes the backend servers, and therefore always knows the health status of each backend server.

We have other types of monitors that we can use as well. All of the default monitors are listed under Traffic Management | Monitors. There are many types of built-in monitors that we can use. They are explained as follows:

· TCP: This monitor checks the availability using the TCP three-way handshake.

· HTTP: This monitor uses HTTP's GET request.

· PING: This monitor uses ICMP.

· TCPS: This monitor checks the availability using the TCP three-way handshake and a successful SSL handshake.

· HTTPS: This monitor uses HTTP's GET request and a successful SSL handshake.

All of the monitors have parameters that define how often they probe a service, and how many probes may fail before the service is marked as offline. Some monitors also have extended parameters. These can be viewed by opening a monitor and going to the Special Parameters pane.

The monitors listed here are just some examples. We also have monitors with the suffix ecv (extended content verification). These are used when we need to send a specific payload with a monitor probe and verify the response. For example, if we want to check a web server for specific response content, we can use the http-ecv monitor. The same can be done with other protocols as well.
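As a sketch, an http-ecv monitor that sends a specific request and checks the response could be created like this; the health page and the expected string are invented for illustration:

```
# Probe succeeds only if the response contains the -recv string
add lb monitor mon-iis-ecv HTTP-ECV -send "GET /healthcheck.html" -recv "OK"
bind service IIS1 -monitorName mon-iis-ecv
```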

Note

There are also some monitors that are not built-in by default. We can add custom monitors for the Citrix web interface, XML service, DDC, and so on. These can be added by going to Monitors | Add. On the right-hand side under Types, there are different Citrix services that we can add a custom monitor to. For example, if we choose CITRIX-XML-SERVICE, we need to specify an application name in the Special Parameters pane. If we click on Create, we can use this monitor when setting up a load-balanced XML service.

Now, we need to create a service in NetScaler for each of the backend servers that host the service. Also, a service is bound to a specific port on a server; this means we cannot create a second service on the same server using a port that is already bound to another service.

If we want to limit the amount of bandwidth or number of clients that can access the backend service, we can add thresholds to the service. This can be done by going to Service | Advanced | Thresholds. This is useful if you have some backend servers that have limited bandwidth, or when you wish to guard yourself against a DDoS attack.
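Roughly, the same thresholds can be set from the CLI; the limits below are arbitrary example values:

```
# Cap concurrent clients and bandwidth (in Kbps) for one service
set service IIS1 -maxClient 1000 -maxBandwidth 10240
```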

After we have created a service for each of our servers, we can go on to create the load-balanced virtual server. Go back to Traffic Management | Load Balancing | Virtual Servers, and click on Add. There are multiple settings that we need to set here. First, we need to enter a name, IP address, port, and protocol. The protocol we choose here is essential. For example, if we choose SSL and the backend servers respond to regular HTTP traffic, NetScaler will automatically do SSL offloading. This means that NetScaler terminates the SSL connection at the VIP, and then forwards regular HTTP requests to the backend servers. The advantage of this is that the backend web servers do not need to spend CPU cycles handling SSL traffic.

When we enable SSL as a protocol on the vServer, the SSL Settings pane is enabled, and here we need to add an SSL certificate for our service. It is important that DNS is configured properly: if the DNS name clients use and the subject name in the certificate do not match, clients will get a certificate warning. It is also important that we have the full SSL certificate chain in place; if not, clients cannot validate the certificate.

If company requirements dictate that all traffic needs to be encrypted from the client to the server, we can use SSL bridging. This enables NetScaler to bridge SSL traffic from the clients through to the backend servers. When we enable SSL bridging, NetScaler disables some features, as it cannot see into the packets because the traffic stays encrypted. For example, features such as content switching, SureConnect, or cache redirection will not work. Also, with SSL bridging, we do not need to add a certificate on NetScaler, as it is already installed on the backend servers.

So for this example, we will use SSL and add a certificate in the SSL Settings pane. After we have done this, we have to bind the backend services or service groups to the vServer. If we do not add a service to the vServer, it will be listed as DOWN until one has been added and assigned. After we have added the required information, the vServer should look something like the following screenshot:
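For this example, the vServer and certificate binding might look as follows on the CLI; the certificate file paths and the cert-key name are assumptions:

```
# Create a cert-key pair from files already copied to the appliance
add ssl certKey iis-cert -cert /nsconfig/ssl/iis.cer -key /nsconfig/ssl/iis.key

# SSL vServer on the VIP; NetScaler offloads SSL and talks HTTP to the backends
add lb vserver vip-iis SSL 80.80.80.80 443
bind ssl vserver vip-iis -certkeyName iis-cert

# Bind the backend services so the vServer can come UP
bind lb vserver vip-iis IIS1
bind lb vserver vip-iis IIS2
```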

Load balancing a generic web application

Now, we should define the load balancing methods and persistency. There are multiple ways to load balance between the different services. They are explained as follows:

· Least connection: In this method, the backend service with the fewest active connections is used. This is the default method.

· Round robin: In this method, the first session is handed to the service that is on top of the list, and the next connection goes to the second service on the list. This continues down the list and then starts over again.

· Least response time: In this method, the service that has the fastest response time is used.

· URL hash: In this method, when a connection is made to a URL for the first time, NetScaler creates a hash to that URL and caches it. So frequent connections to the same URL will go to the same service.

· Domain hash: In this method, when a connection is made to a domain name for the first time, NetScaler creates a hash for that name and caches it. So, frequent connections to the same domain will go to the same service. The domain name is fetched either from the URL or from the HTTP headers.

· Destination IP hash: In this method, when a connection is made to a specific IP address for the first time, NetScaler creates a hash for that destination IP, and redirects all connections to that IP address through the same service.

· Source IP hash: In this method, when a connection is made from an IP address for the first time, for example 10.0.0.1, NetScaler creates a hash out of the source IP. Frequent connections made from the IP and/or subnet will go to the same service.

· Source destination IP hash: In this method, NetScaler creates a hash based upon the source and destination IP. This ensures that a client will be connected to the same server.

· Call ID hash: In this method, NetScaler creates a hash based upon the Call ID in the SIP header. This makes sure that a SIP session is directed to the same server.

· Source IP source port hash: In this method, NetScaler creates a hash based upon the source IP and source port. This ensures that a particular connection will go to the same server.

· Least bandwidth: This method is based upon the service with the least amount of bandwidth usage.

· Least packets: This method is based upon the service with the fewest packets.

· Custom load: This method allows us to set custom weights.

· Token: In this method, NetScaler selects a service based upon a value from the client request using expressions.

· LRTM (Least Response Time Method using Monitors): In this method, NetScaler selects the service with the lowest response time, as measured by monitor probes.

Some load balancing methods are intended only for particular services and protocols, so when we set up load balancing and want to use a custom load balancing method, we should verify that the method is supported for the service in question. For example, Lync 2013 uses a special NetScaler monitor, which is listed in its setup guide.

Here, we will use the round-robin method. After we have chosen a way to load balance, we can choose how the connection will persist to the service. Again, there are different methods for a connection to persist. They are listed as follows:

· Source IP: In this method, connections from the same source IP are persisted to the same server.

· Cookie insert: In this method, each client is given a cookie, which contains the IP address and the port of the service that the client is accessing. The client uses the cookie to maintain a persistent connection to the same service.

· SSL session: This method bases persistency upon the SSL session ID of the client.

· Rule: This method is based upon custom made rules.

· URL passive: This method bases persistency upon URL queries.

· Custom server ID: In this method, servers can be given a custom server ID, which can be used in URL queries.

· Destination IP: In this method, connections to the same destination IP are persisted to the same server.

· Source and destination IPs: In this method, connections from the same source IP to the same destination IP are persisted to the same server.

· RTSP session ID: This method bases persistency upon the RTSP session ID.

· SIP call ID: In this method, persistency is based upon the same SIP call ID.

Some of the persistency types are specific to a particular type of vServer, and all persistency types have a timer attached to them, which defines how long a connection should persist to a service. You can view more about the different persistency types, and what kind of protocol they can be used for, at http://support.citrix.com/proddocs/topic/netscaler-load-balancing-93/ns-lb-persistence-about-con.html.
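Both the method and the persistency type are properties of the vServer. As a sketch, assuming a vServer named vip-iis, round robin with cookie-based persistency and a source IP backup could be set like this (the 2-minute timeout is just an illustration):

```
set lb vserver vip-iis -lbMethod ROUNDROBIN
set lb vserver vip-iis -persistenceType COOKIEINSERT -timeout 2 -persistenceBackup SOURCEIP
```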

Note

We also have the option to set a backup persistence. This is used when a connection does not support the primary type.

Now, let us explore a bit about the more advanced configurations that we can configure on a vServer.

Assigning weights to a service

Assigning weights to a service allows us to distribute load in proportion to server capacity, based upon parameters such as hardware. If we have existing backend web servers with 4 GB RAM, and newly set up servers with 8 GB RAM, then the new ones should be given a higher weight. This can be done when we attach a service to a load-balanced vServer. The higher the weight we set on a service, the larger the share of traffic/connections it receives. This is shown in the following screenshot:

Assigning weights to a service

However, it is important to remember that not all load balancing methods support weighting. For example, the hashing load balancing methods and the token load balancing method do not support weighting.
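When binding services, the weight is given per binding. A sketch, assuming a vServer named vip-iis and the IIS1/IIS2 services from the earlier example:

```
# The newer, larger server gets twice the share of connections
bind lb vserver vip-iis IIS1 -weight 1
bind lb vserver vip-iis IIS2 -weight 2
```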

Redirect URL

Redirect URL is a function that allows us to send a client to a custom web page if the vServer is down. This only works if the vServer is set up using the HTTP or HTTPS protocols. This can be useful for instances where we have a planned maintenance or some unplanned failures, and we want to redirect users to a specific web page, where we have posted information about what is happening. This feature can be configured under vServer | Advanced | Redirect URL.
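On the CLI, this is a single vServer property; the vServer name and the maintenance URL below are placeholders:

```
# Clients are sent here when the vServer is DOWN
set lb vserver vip-iis -redirectURL "http://status.example.com/maintenance.html"
```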

Backup vServer and failover

Backup vServer allows us to failover to another vServer, in case the main vServer should go down. This can be configured under vServer | Advanced | Backup vServer.

Note

An important point to note is that a NetScaler Gateway vServer can also be configured to be used as a backup vServer.

In addition to handling failover, we can also use the backup vServer to handle excessive traffic, in case the primary vServer is flooded. This is known as spillover. We can define spillover based upon different criteria, such as bandwidth or the number of connections. We can then define what the vServer should do if there are too many connections, for example, drop new connections, accept them, or redirect them to the backup vServer. These settings can be configured in the same pane as the failover settings. Here, we need to configure the method and the kind of action we want it to take.
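Both settings can be sketched on the CLI as follows; the vServer names and the connection threshold are placeholders:

```
# Fail over to vip-iis-backup if vip-iis goes down
set lb vserver vip-iis -backupVServer vip-iis-backup

# Spill over to the backup vServer above 500 concurrent connections
set lb vserver vip-iis -soMethod CONNECTION -soThreshold 500
```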

Note

If we have configured a backup vServer and a redirect URL for the same vServer, then the backup vServer takes precedence over the redirect URL.

We have now gone through the basics of setting up a load-balanced service, and some of the advanced configuration that we can set. Now, let us continue with this and use the basics to set up load balanced services for Citrix XenApp and XenDesktop.

Now, there are only certain particular services that we can set up as load-balanced in a Citrix environment. They are listed as follows:

· Storefront/Web Interface

· Desktop delivery controllers

· XML service

· TFTP for provisioning servers

Note

NetScaler includes a set of ready-made wizards, which allow us to easily create a load-balanced service for Citrix services such as WI, DDC, and XML. The examples in this book do not use the wizards, so as to give you a clear understanding of what lies underneath.

Load balancing StoreFront

StoreFront is the replacement for Web Interface, and is included by default in XenDesktop 7. This deployment should be in place before starting to set up a load-balanced service for it. We should configure StoreFront as part of a server group. More information about this can be found in the eDoc located at http://support.citrix.com/proddocs/topic/dws-storefront-21/dws-deploy-join.html.

Before we start configuring a load-balanced service for StoreFront, we need a monitor in place to verify that the StoreFront store is functioning and not just checking if it is responding on HTTP or HTTPS. NetScaler has a built-in monitor that we can use for this purpose, but we have to specify some additional parameters and create it first. Go to Load Balancing | Monitors, and click on Add. In the right-hand part of the window, choose STOREFRONT from the list, and then go to the Special Parameters pane. Here, we need to enter the name in the Store Name field, as shown in the following screenshot. This can be found in the StoreFront administration console.

Load balancing StoreFront

After this is done, give the monitor a name, and click on Create. Now, we can continue on with the setup as follows:

1. First, we need to add the backend servers that are running StoreFront to the server list.

2. Next, we need to bind a service to the servers. The difference from when we configured the generic web application is that we need to choose the custom-made monitor we created. This will make sure that the StoreFront store is functioning before a client connects to it. Another option we could configure here is to allow NetScaler to insert the client IP into the HTTP header. Because of the way NetScaler operates, the StoreFront server will never actually see the client IP, which is sometimes needed for troubleshooting and logging purposes. We can configure this while setting up the services by going to Advanced | Settings | Enable Client IP. In the header field, we enter a name such as X-Forwarded-For, which makes the client IP easy to identify in the web server logs. When we have created the service for each of the StoreFront servers, it is time to create the vServer.

3. Go to Virtual Servers and click on Add. Enter an IP address, a port, and a protocol. Lastly, bind the backend services to the vServer. Now, depending on how we want the users to access the StoreFront resource, we need to consider what kind of protocol we set here. For example, if all users access Citrix using the Gateway function, we should choose the HTTP protocol, and change the URL in the session policy to point to the new VIP created by the load-balanced server. If Citrix access is also available to internal users directly, without using Gateway, we should choose SSL and add a certificate, letting NetScaler handle the SSL traffic. If we want the traffic to be encrypted all the way from the client to StoreFront, we should choose SSL_BRIDGE as the protocol type.

4. Most regular deployments use SSL here. After we have defined this, we should add a persistency to the vServer, which allows us to stay connected to the same StoreFront server. The recommended setting here is to use COOKIEINSERT, and the timeout value should be set to 0, which means no expiry. This will allow users to reconnect to the same StoreFront server as long as it is responding to requests. Also, if there are browsers that do not allow or are configured to not allow cookies, we should set up a backup persistency such as SOURCEIP.
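The StoreFront steps above can be sketched on the CLI as follows; the store name, IP addresses, and object names are assumptions for illustration:

```
# StoreFront monitor that probes the store itself
add lb monitor mon-storefront STOREFRONT -storename Store

# Backend StoreFront servers and services, inserting the client IP
add server sf1 10.0.0.10
add server sf2 10.0.0.11
add service svc-sf1 sf1 SSL 443 -cip ENABLED X-Forwarded-For
add service svc-sf2 sf2 SSL 443 -cip ENABLED X-Forwarded-For
bind service svc-sf1 -monitorName mon-storefront
bind service svc-sf2 -monitorName mon-storefront

# vServer with cookie persistency that never expires (timeout 0)
add lb vserver vip-storefront SSL 10.0.0.100 443
bind lb vserver vip-storefront svc-sf1
bind lb vserver vip-storefront svc-sf2
set lb vserver vip-storefront -persistenceType COOKIEINSERT -timeout 0 -persistenceBackup SOURCEIP
```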

Load balancing Web Interface

Setting up a load-balanced Web Interface is not very different from setting up StoreFront. The main difference is the monitor that is used for the services. NetScaler also has a predefined Web Interface monitor, which is called CITRIX-WEB-INTERFACE, but we have to create it before we can use it. We also have to enter a site path in the Special Parameters pane of the monitor window. This monitor checks for resource availability by authenticating an Active Directory user. You can read more about setting up Web Interface monitoring at http://support.citrix.com/proddocs/topic/netscaler-load-balancing-93/ns-lb-monitors-builtin-wi-tsk.html. Other than that, the configuration is the same as it would be with StoreFront. Enter the servers, bind the services, and create the load-balanced server.

Load balancing XML Broker

The XML Broker service handles communication between Web Interface/StoreFront and the data collector. Web Interface queries the XML Broker for information such as which applications and resources are available for a user.

The XML Broker service is needed in a XenApp/XenDesktop environment, and can be deployed as a load-balanced service. Again, Citrix has made a custom monitor available, which we can use to check whether the XML Broker service is responding. To add the custom monitor, go to Load Balancing | Monitors, and click on Add. Here, choose CITRIX-XML-SERVICE, and in the Special Parameters pane, enter an application name. The default application is Notepad. This monitor opens a connection to the service and probes the XML Broker service to which it is bound. If the server responds as expected within the configured time period, the monitor marks the service as up. After this is added, we can start adding the servers that run the XML service under Servers. Then, we add the XML service under Services by choosing HTTP as the protocol, and entering the port on which the XML service is running. Next, we create the load-balanced vServer by choosing the HTTP protocol and port 80, and binding the services to the vServer.
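As a sketch, the XML Broker setup might look like this on the CLI; the server IP and object names are placeholders:

```
# Custom XML service monitor; the probe checks for the given application
add lb monitor mon-xml CITRIX-XML-SERVICE -application Notepad

add server xml1 10.0.0.20
add service svc-xml1 xml1 HTTP 80
bind service svc-xml1 -monitorName mon-xml

add lb vserver vip-xml HTTP 10.0.0.101 80
bind lb vserver vip-xml svc-xml1
```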

Note

Even though we can choose to create the XML service with HTTP, it is always considered a best practice to use SSL so as to secure communication, even if the traffic is internal. Also, if you intend to use HTTP for XML, a best practice is to use another port instead of port 80.

When we are finished with the configuration of the vServer, we can use this IP when connecting to StoreFront or Web Interface. For example, if we want to use the vServer with StoreFront, we can add the load-balanced server under Add Delivery Controllers | XenApp | Servers. Here, we can enter the IP address of the load-balanced XML service.

Load balancing Desktop Delivery Controller

With the release of XenDesktop 7 and the combining of XenApp/XenDesktop architecture, this has become a more crucial part to load balance. Yet again, there is a custom monitor that needs to be created called CITRIX-XD-DDC. In the Special Parameters pane, we need to enter an AD user, which can be used to validate credentials. This is shown in the following screenshot:

Load balancing Desktop Delivery Controller

Now, add the servers, bind them to the service, and then create a load-balanced vServer, as we have done for other load-balanced services.

Load balancing TFTP for provisioning servers

This is a new feature that came with NetScaler 10.1: the ability to load balance TFTP servers. Before the 10.1 release, this required a great deal of work, including the use of Direct Server Return (DSR) and other options; this is no longer necessary.

An important point to remember is that when you boot a virtual machine using PVS, it uses either PXE or the DHCP options, including options 66 and 67. This guide uses DHCP options to distribute the link to the bootstrap file.

Now, in order to set up load balancing for TFTP properly, we need a monitor that we can use to verify if load balancing is operational. To get a monitor for TFTP, we can follow the guidelines located at http://blogs.citrix.com/2011/01/06/monitoring-tftp-with-citrix-NetScaler-revisited-pvs-edition/.

After we have created the monitor, we need to add the servers where the TFTP service resides. Next, we need to create a service for each TFTP server. Here, we need to choose TFTP as the protocol, and in the Port field we need to enter 69. Then, we need to bind the custom-made monitor to the service. After this is done, we create the load-balanced vServer, where we enter an IP address, the name, the protocol (TFTP), and the port (69). When this is complete and we have created the vServer, we can alter the DHCP option 66 to point to the new VIP address that we created in NetScaler.
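A rough CLI equivalent of these steps, with placeholder names and IPs (the custom TFTP monitor from the blog post above would be bound to the services in place of the default one):

```
add server pvs1 10.0.0.30
add server pvs2 10.0.0.31
add service svc-tftp1 pvs1 TFTP 69
add service svc-tftp2 pvs2 TFTP 69

# DHCP option 66 is then pointed at this VIP
add lb vserver vip-tftp TFTP 10.0.0.102 69
bind lb vserver vip-tftp svc-tftp1
bind lb vserver vip-tftp svc-tftp2
```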

As a side note, it is also possible to deliver the bootstrap using HTTP instead of TFTP. This scenario is only viable for XenServer as it uses gPXE, which allows for extra features such as HTTP. This makes it a lot easier to load balance, as we only need to load balance a simple HTTP server, and change the option 67 boot filename to point to http://serverip/ARDBP32.BIN. However, this is not supported by Citrix and should only be used in environments where HTTP is a more suited protocol. As always, remember to save the configuration using the GUI or the save config command in CLI.

Note

Make sure you are running build 120 of NetScaler or higher before setting up TFTP load balancing. If you have build 118 or 119, make sure that you do not create a load-balanced TFTP server before setting up high-availability, or else NetScaler will crash. This is a known issue that has been fixed in build 120. You can read more about it at http://support.citrix.com/proddocs/topic/ns-rn-main-release-10-1-map/ns-rn-known-issues-10-1-118-x-con.html.

Load balancing SharePoint 2013

SharePoint has become quite a complex product in its latest releases, from starting out as a portal solution to becoming a complete collaboration platform for businesses. SharePoint can be seen as a web application, and it primarily uses HTTP and HTTPS protocols for delivering content to the users. With SharePoint 2013, there have also been some changes in how it operates. For example, Microsoft has introduced a new distributed cache system, which allows a frontend web server to store a login token in memory. This token is also available for other frontend web servers in the farm. This means that we do not need to set up persistency, as all of the authentication tokens are stored in the cache of the web servers. Also, SharePoint 2013 supports SSL offloading, which means that we can use NetScaler to handle SSL traffic and thereby reduce the load on the SharePoint servers by allowing them to respond only on HTTP.

Lastly, as SharePoint has the concept of internal and public URLs (known as alternate access mappings), we need to configure this as well when we set up load balancing, so that SharePoint knows we have set up a new public URL for the site. We start by adding the frontend SharePoint web servers to the server list by going to Traffic Management | Load Balancing | Servers. Next, we need to create a service or service group, where we add the servers and bind them to port 80 and protocol HTTP. Then, we add an HTTP monitor to the services, and click on Create.

Note

There is also a way to create a custom monitor that actually monitors if a user can authenticate on the SharePoint site. You can read more about this at http://support.citrix.com/article/CTX126201.

Now, we need to create the load-balanced vServer. Here, we bind the services we created earlier, and assign a name, IP address, protocol (SSL), and port (443). We also need to bind a certificate, which can be done in the SSL Settings pane.

After we are done creating the load-balanced service, we need to make some changes in SharePoint. We have to configure public URLs under Farm Settings | Alternate Access Mappings in SharePoint. You can read more about this at http://technet.microsoft.com/en-us/library/cc263208.aspx. Now, we have successfully created a load-balanced SharePoint service.
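The whole SharePoint setup can be sketched as follows; the IPs, object names, and cert-key name are assumptions:

```
# Frontend SharePoint web servers in one service group
add server sp1 10.0.0.40
add server sp2 10.0.0.41
add serviceGroup sg-sharepoint HTTP
bind serviceGroup sg-sharepoint sp1 80
bind serviceGroup sg-sharepoint sp2 80
bind serviceGroup sg-sharepoint -monitorName http

# SSL-offloaded vServer; no persistency is needed because of the
# SharePoint 2013 distributed token cache
add lb vserver vip-sharepoint SSL 10.0.0.103 443
bind ssl vserver vip-sharepoint -certkeyName sp-cert
bind lb vserver vip-sharepoint sg-sharepoint
```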

One thing that is important to note is that NetScaler has a feature called AppExpert. AppExpert allows for simplified deployment of services, including load balancing rules, caching rules, compression, redirects, and so on. Citrix has created a template for SharePoint 2013 that we can use with AppExpert. The template can be found at http://www.citrix.com/static/appexpert/appexpert-template.html.

An important point to note is that AppExpert templates are based upon default settings from Citrix and require tuning before being used in a production environment. This feature is available under the AppExpert pane. From there, we import the template we downloaded from the URL mentioned earlier by clicking on Import AppExpert Template. We have to import both the deployment file and the template file, and then give the application a name. By default, it is called SharePoint 2013. Next, go to AppExpert | Applications. This lists the application name that was created earlier, along with the different services that make up the SharePoint 2013 site, as shown in the following screenshot:

Load balancing SharePoint 2013

The different services are listed as a subgroup of the application. We can define a public endpoint for each of the services or for the whole application. The same goes for backend services: we can define which backend servers should host each of the services. In larger deployments, the different services will be scattered across different backend servers; in a small deployment, they may all reside on the same servers. So, all we need to do is create a public endpoint at the SharePoint 2013 application level (which creates a load-balanced VIP server), and configure backend services, which bind the load-balanced service to the backend servers. The only thing we need to do first is add the backend servers to the server list.

Load balancing Exchange 2013

Exchange has always been difficult to load balance because of the way it works, but with the release of Exchange 2013, this has become a lot easier, as the architecture has been dramatically simplified into just two roles: the Client Access server (CAS) and the Mailbox server. The CAS now serves only as a stateless proxy to the Mailbox server. This means that we can load balance at layer 4, as it does not matter which CAS a user is sent to. Also, RPC has been removed as a protocol, and HTTPS is now used by default with Outlook Anywhere, which makes it much easier to load balance. When configuring Exchange, we need to set up CAS with an external URL, which will point to the load-balanced VIP on NetScaler.

Note

An important point to note is that Exchange 2013 does not support SSL offloading, even with the latest releases, CU3 and Service Pack 1, and Microsoft has not indicated that this will change. Even though this places an extra load on the Exchange servers, they still benefit from NetScaler's ability to do SSL session multiplexing and health checking.

Now, there are multiple features and protocols that we can load balance using NetScaler. They are listed as follows:

· Outlook Web Access (OWA)

· Outlook Anywhere

· ActiveSync

· IMAP4

OWA, Outlook Anywhere, and ActiveSync all use the same port and can be load balanced using the same vServer; the only difference is that they are available on different URL paths. First, we need to add the servers that are running CAS to the list of servers. Next, we need to create a service or service group, which we bind to the servers on port 443 with SSL as the protocol. After we have chosen SSL as the protocol for the service, the SSL Settings pane becomes active, and there we need to add the digitally signed certificate that is attached to CAS. This can be done by going to Traffic Management | SSL. From there, we can import the certificate and then install it for the service.
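The same configuration can also be done from the NetScaler CLI. The following is a minimal sketch; the server names, IP addresses, and certificate file names (cas1, cas2, owa_cert, and so on) are placeholders that you should replace with your own values:

```
add server cas1 10.0.0.21
add server cas2 10.0.0.22
add service svc_cas1 cas1 SSL 443
add service svc_cas2 cas2 SSL 443
add ssl certKey owa_cert -cert owa.cer -key owa.key
```

The last command installs the certificate and key pair on NetScaler so that it can later be bound to the load-balanced vServer.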

Note

The purpose of the certificate is to ensure that NetScaler can establish a secure connection to the backend OWA servers, as the use of certificates requires that both parties have a trusted root certificate in place in order to trust the connection.

Next, we need to create a vServer to set up a load-balanced service. We bind it to a virtual IP address, port (443), and protocol (SSL), and bind a certificate to the vServer. Under Method and Persistence, we choose Least Connection and COOKIEINSERT respectively with a timeout of 2 minutes, and then click on Create. It is also important to set the external domain URL in CAS. This needs to be set from the Exchange management console, which you can read more about at http://technet.microsoft.com/en-us/library/jj218640%28v=exchg.150%29.aspx.
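On the CLI, the vServer creation described above can be sketched as follows, assuming services named svc_cas1 and svc_cas2 and a certificate key pair named owa_cert already exist (all names and the VIP address are placeholders):

```
add lb vserver vs_owa SSL 192.168.1.50 443 -lbMethod LEASTCONNECTION -persistenceType COOKIEINSERT -timeout 2
bind lb vserver vs_owa svc_cas1
bind lb vserver vs_owa svc_cas2
bind ssl vserver vs_owa -certkeyName owa_cert
```

The -timeout value is the persistence timeout in minutes, matching the 2-minute setting chosen in the GUI.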

The external domain URL in the Exchange management console needs to point to the VIP address of the load-balanced service we created.

IMAP

IMAP is also a protocol that is commonly used in conjunction with Exchange, even though it does not provide many of the same features, such as calendar and public folders. IMAP is primarily used by a client to access e-mail on an Exchange server. Note that IMAP is not enabled by default on Exchange 2013. If you want to use this feature, you can read more about it at http://technet.microsoft.com/en-us/library/bb124489(v=exchg.150).aspx.

IMAP primarily uses two ports: TCP 143 for non-secure connections and TCP 993 for secure connections. Again, if we already have the CAS hosts on the server list, we do not need to add them again; if not, add them to the list. Before we set up a service, we need to create a custom monitor. Go to Traffic Management | Load Balancing | Monitors, and click on Add. Here, we need to enter a name, define an interval of 30 seconds, and define port 143 as the destination port. As the type, choose TCP-ECV and then go to the Special Parameters pane. Here, we need to type The Microsoft Exchange IMAP4 service is ready as the received string. This monitor queries CAS on that particular port and expects that text in response. Next, we need to create a service or service group. Add the CAS hosts to the list and bind them to the service, using protocol TCP and port 143. Then, bind the custom-made monitor we just created.
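The monitor and service setup can also be sketched from the CLI. The names below are placeholders, and cas1 is assumed to already exist as a server object:

```
add lb monitor mon_imap TCP-ECV -recv "The Microsoft Exchange IMAP4 service is ready" -interval 30 -destPort 143
add service svc_imap_cas1 cas1 TCP 143
bind service svc_imap_cas1 -monitorName mon_imap
```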

Now, we need to create a vServer. To this, we bind the service we created earlier, protocol SSL_TCP, port 993, and define a virtual IP address. Then, we need to add a digital certificate in the SSL Settings pane of the vServer to ensure that clients can use the IMAP service securely.
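A CLI sketch of the secure IMAP vServer, assuming a service named svc_imap_cas1 and a certificate key pair named imap_cert already exist (names and the VIP address are placeholders):

```
add lb vserver vs_imaps SSL_TCP 192.168.1.51 993
bind lb vserver vs_imaps svc_imap_cas1
bind ssl vserver vs_imaps -certkeyName imap_cert
```

With SSL_TCP, NetScaler terminates SSL on port 993 and forwards plain TCP to the backend service on port 143.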

Note

As we saw in the SharePoint section, Citrix has the AppExpert feature, which simplifies the deployment of a service and configures optimizations such as caching and redirection. As of now, this is only available for Exchange 2010, but stay tuned to http://www.citrix.com/static/appexpert/appexpert-template.html for newer releases covering Exchange 2013.

Load balancing MSSQL

At the time of writing, NetScaler is the only certified load balancer that can load balance the MySQL and MSSQL services. The setup can be quite complex, and there are many requirements that need to be in place in order to set up a properly load-balanced SQL server.

Let us go through how to set up a load-balanced Microsoft SQL Server deployment. Now, it is important to remember that using load balancing between the end clients and SQL Server requires that the databases on the SQL servers are synchronized. This is to ensure that the content the user is requesting is available on all the backend servers. Microsoft SQL Server supports different types of availability solutions, such as replication. You can read more about it at http://technet.microsoft.com/en-us/library/ms152565(v=sql.105).aspx. Using transactional replication is recommended, as it replicates changes to the different SQL servers as they occur.

Note

As of now, the load balancing solution for MSSQL, also called DataStream, supports only certain versions of SQL Server. They can be viewed at http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-dbproxy-reference-protocol-con.html. Also, only certain authentication methods are supported. As of now, only SQL authentication is supported for MSSQL.

The steps to set up load balancing are as follows:

1. We need to add the backend SQL servers to the list of servers.

2. Next, we need to create a custom monitor that we are going to use against the backend servers.

3. Before we create the monitor, we can create a custom database within SQL Server that NetScaler can query.

4. Open Object Explorer in the SQL Management Studio, and right-click on the Database folder. Then, select New Database, as shown in the following screenshot:

Load balancing MSSQL

5. We can name it ns and leave the rest at their default values, and then click on OK. After that is done, go to the Database folder in Object Explorer.

6. Then, right-click on Tables, and click on New Table. Here, we need to enter a column name (for example, test), and choose nchar(10) as the data type. Then, save the table, and we are presented with a dialog box that gives us the option to change the table name. Here, we can type test again.

7. We have now created a database called ns with a table called test, which contains a column also called test. This is an empty database that NetScaler will query to verify connectivity to the SQL server.

Now, we can go back to NetScaler and continue with the set up. First, we need to add a DBA user. This can be done by going to System | User Administration | Database Users, and clicking on Add. Here, we need to enter a username and password for a SQL user who is allowed to log in and query the database.
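From the CLI, the database user can be added with a single command; the username and password here are placeholders:

```
add db user sa -password MySecretPassword1
```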

After that is done, we can create a monitor. Go to Traffic Management | Load Balancing | Monitors, and click on Add. As the type, choose MSSQL-ECV, and then go to the Special Parameters pane.

Here, we need to enter the following information:

· Database: This is ns in this example.

· Query: This is a SQL query, which is run against the database. In our example, we type select * from test.

· User Name: Here we need to enter the name of the DBA user we created earlier. In my case, it is sa.

· Rule: Here, we enter an expression that defines how NetScaler verifies whether the SQL server is up. In our example, it is MSSQL.RES.ATLEAST_ROWS_COUNT(0), which means the query must return at least zero rows; in other words, the monitor marks the server as up as long as the query executes successfully.

· Protocol Version: Here, we need to choose the one that works with the SQL Server version we are running. In my case, I have SQL Server 2012.
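Putting the values above together, the monitor can be sketched on the CLI. This is an assumption-laden sketch: the monitor name is a placeholder, and the parameter names (-database, -sqlQuery, -evalRule, -mssqlProtocolVersion) should be verified against the command reference for your firmware version:

```
add lb monitor mon_mssql MSSQL-ECV -userName sa -database ns -sqlQuery "select * from test" -evalRule "MSSQL.RES.ATLEAST_ROWS_COUNT(0)" -mssqlProtocolVersion 2012 -interval 30
```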

So, the monitor now looks like the following screenshot:

Load balancing MSSQL

It is important that the database we created earlier is created on all the SQL servers we are going to load balance using NetScaler. Now that we are done with the monitor, we can bind it to a service. When setting up the services against the SQL servers, remember to choose MSSQL as the protocol and 1433 as the port, and then bind the custom-made monitor to each service. After that, we need to create a virtual load-balanced service. An important point to note here is that we again choose MSSQL as the protocol and use the same port number as before, 1433.
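The service and vServer setup can be sketched on the CLI as follows. Server names, IP addresses, and the VIP are placeholders, and a monitor named mon_mssql is assumed to already exist:

```
add server sql1 10.0.0.31
add server sql2 10.0.0.32
add service svc_sql1 sql1 MSSQL 1433
add service svc_sql2 sql2 MSSQL 1433
bind service svc_sql1 -monitorName mon_mssql
bind service svc_sql2 -monitorName mon_mssql
add lb vserver vs_mssql MSSQL 192.168.1.60 1433
bind lb vserver vs_mssql svc_sql1
bind lb vserver vs_mssql svc_sql2
```

If you need to present the vServer to clients as a different SQL Server version, the server version can be set on the vServer, for example with -mssqlServerVersion 2008R2 (again, verify this parameter against your firmware's command reference).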

We can use NetScaler to proxy connections between different versions of SQL Server. Even though our backend servers run SQL Server 2012, we can present the vServer to clients as a 2008 server. For example, if we have an application that only supports SQL Server 2008, we can make some custom changes to the vServer. When creating the load-balanced vServer, go to Advanced | MSSQL | Server Options. Here, we can choose between different versions, as shown in the following screenshot:

Load balancing MSSQL

After we are done with the creation of the vServer, we can test it by opening a connection using the SQL Management Server to the VIP address. We can verify whether the connection is load balancing properly by running the following CLI command:

stat lb vserver <nameofvserver>

Summary

We have now gone through the different load-balanced services, such as a generic web application, different Citrix components, Exchange, SharePoint, and SQL Server using the DataStream service. Load balancing is essential for many businesses, and it is important to understand how NetScaler can load balance between different backend servers.

In the next chapter, we will go through how to use compression and caching to improve the performance of web services and websites.