Exam Ref 70-413 Designing and Implementing a Server Infrastructure, Second Edition (2014)
Chapter 2. Design and implement network infrastructure services
Certain services have been a part of computer networks for decades and are largely unchanged. The intent of networking computers is to facilitate communication between them, which requires a standard set of services to make that communication possible. This chapter discusses two of these services: the Dynamic Host Configuration Protocol (DHCP) provides IP address and basic configuration information to clients on a network; the Domain Name System (DNS) provides a way for computers to refer to each other using names that are more human-friendly. You’ll also take a look at some alternatives to these services and some scenarios in which they might not be entirely necessary. Finally, you’ll look at the new IP address management solution provided in Windows Server 2012, a tool that has the potential to ease the headaches felt by system administrators for years.
Objectives in this chapter:
Objective 2.1: Design and maintain a Dynamic Host Configuration Protocol (DHCP) solution
Objective 2.2: Design a name resolution solution strategy
Objective 2.3: Design and manage an IP address management solution
Objective 2.1: Design and maintain a Dynamic Host Configuration Protocol (DHCP) solution
The Dynamic Host Configuration Protocol (DHCP) has two primary roles in a network: providing IP addressing information and providing basic network configuration settings. A DHCP server typically provides (at a minimum) an IP address and subnet mask for the client device, the IP address of the router or gateway, and one or more IP addresses for Domain Name System (DNS) servers that will be used to perform name resolution.
The process of designing a DHCP solution involves many factors. The overall network design is a driving force in DHCP server placement, as well as the configuration applied to each server. DHCP broadcast requests do not traverse network routers by default, so DHCP relay agents are often used in large networks to assist in routing these DHCP requests to a DHCP server that can respond. DHCP scopes are also typically driven by the overall network configuration. Not only do scopes provide the pools of IP addresses but they also can be used to provide configuration information for clients on a specific network segment using DHCP options such as the subnet mask to be used or the IP address of the router for that network segment.
In addition to the network, the services used within a network should also be considered when designing a DHCP solution. Most networks use DNS for name resolution, and DHCP options provide a simple method of providing DNS server addresses to clients. Some network services such as Windows Deployment Services (WDS) actually require that a DHCP server be available to provide IP addresses and configuration information to network clients.
Windows Server 2012 offers several tools with which to make your DHCP infrastructure design more flexible. DHCP options are used to provide configuration such as IP addresses for routers and DNS servers. A key part of your DHCP design is understanding where and how these options should be applied. For example, the router IP address is typically determined by subnet and can often be easily applied to a DHCP scope. On the other hand, DNS servers are often the same across an enterprise, making them good candidates to be configured at the server level. DHCP policies, a new feature in Windows Server 2012, provide even more flexibility. Using conditions based on the client’s MAC address, vendor class, or even fully qualified domain name (FQDN), DHCP options can be applied to clients meeting the conditions of the policy.
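As an illustration of applying options at different levels, the following PowerShell sketch (assuming the DhcpServer module and example addresses) sets DNS options once at the server level and the router option on a single scope:
# Server-level options inherited by every scope (example values)
Set-DhcpServerv4OptionValue -DnsServer 10.0.0.10, 10.0.0.11 -DnsDomain "contoso.com"
# Scope-level option for the router on the 10.0.1.0/24 segment
Set-DhcpServerv4OptionValue -ScopeId 10.0.1.0 -Router 10.0.1.1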
Because DHCP is so critical to almost every aspect of your corporate network, it is essential to ensure high availability for DHCP services throughout your network. Several high-availability options are provided to DHCP server administrators in Windows Server 2012, and each has scenarios in which it can be the best solution. Choosing the appropriate high-availability method for different network scenarios and use cases is a critical aspect of designing a complete DHCP solution.
This objective covers how to:
Design a highly available DHCP service and plan for interoperability
Implement DHCP filtering
Implement and configure a DHCP management pack
Maintain a DHCP database
Designing a highly available DHCP service
Due to the nature of DHCP, it is important to have the service available to the network at all times. In addition to ensuring that client computers can reach servers on the network to function properly, many network services actually rely heavily on the configuration information provided by DHCP servers on the network. Windows Server 2012 R2 provides several methods of ensuring that your DHCP services are highly available.
Because DHCP clients use broadcast traffic to find a DHCP server and request an IP address, you can’t simply place two DHCP servers on the same network segment because they can provide conflicting information, causing significant problems for DHCP clients. An additional concern is that DHCP broadcast requests travel only as far as the local network router by default, requiring the use of a DHCP relay agent in order to pass DHCP requests on to the appropriate DHCP server. Any high-availability solution needs to take these networking intricacies into account.
Split scope
The oldest high-availability trick in the DHCP book is to use split scopes. DHCP scopes are used to provide a specific range of IP addresses for a portion of your network as well as the configuration information needed for that network segment. A split scope is typically configured with identical scopes on both servers, but the scope is split by using opposing exclusion ranges to prevent the two servers from offering the same IP address to a client. By splitting the scope in two and configuring two separate DHCP servers to handle a portion of the scope, you can better ensure that a DHCP server is always available to your network.
A typical split-scope deployment is configured with a primary server that provides 80 percent of the address space and a secondary server responsible for the remaining 20 percent. An optional delay can also be configured on the secondary server to ensure that the primary server handles the majority of the address leases. Although split-scope configuration used to be a manual process, Windows Server 2012 provides a wizard-based means to configure a split-scope deployment (see Figure 2-1).
FIGURE 2-1 DHCP split scopes are the traditional method of providing high availability
As with many aspects of life, there are pros and cons to using a split-scope implementation. A benefit of using this approach is the fact that legacy DHCP servers can participate in a split-scope implementation, although they might require manual configuration. The key disadvantage of a split-scope implementation is that each DHCP server is capable of serving only a portion of your DHCP scope. If a DHCP server becomes unavailable, only a percentage of the scope is available for use until the server is reconfigured or the other DHCP server is restored to service. A key differentiator between a split-scope configuration and other high-availability options is that the two servers do not share or synchronize IP address leases, configuration information, or the DHCP database.
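Outside the wizard, a split scope can also be approximated with the DhcpServer PowerShell module by creating identical scopes on both servers and adding opposing exclusion ranges. The following is a minimal sketch assuming an example 192.168.10.0/24 scope and an 80/20 split.
# On the primary server: exclude the top 20 percent of the range
Add-DhcpServerv4Scope -Name "Branch" -StartRange 192.168.10.1 -EndRange 192.168.10.200 -SubnetMask 255.255.255.0
Add-DhcpServerv4ExclusionRange -ScopeId 192.168.10.0 -StartRange 192.168.10.161 -EndRange 192.168.10.200
# On the secondary server: create the same scope but exclude the bottom 80 percent
Add-DhcpServerv4Scope -Name "Branch" -StartRange 192.168.10.1 -EndRange 192.168.10.200 -SubnetMask 255.255.255.0
Add-DhcpServerv4ExclusionRange -ScopeId 192.168.10.0 -StartRange 192.168.10.1 -EndRange 192.168.10.160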
DHCP failover clustering
Another method of achieving the high availability typically found with DHCP services is failover clustering. Failover clusters require a minimum of two servers, known as cluster nodes, and can support up to 64 nodes. Failover clusters use shared storage to access and maintain the same DHCP database. A basic failover cluster configuration is shown in Figure 2-2.
FIGURE 2-2 Failover clusters require shared storage in order to maintain a consistent DHCP database
A failover cluster composed of multiple cluster nodes appears to network clients as a single host even if one node becomes unavailable, making it a good candidate for high availability. Although a failover cluster has the potential to provide a high level of reliability, it has significant hardware requirements for DHCP servers. Cluster nodes must be able to simultaneously access shared storage using Serial Attached SCSI, Fibre Channel, or iSCSI. A potential weakness of failover clusters is that the shared storage can become a single point of failure. Hardware compatibility can be verified using the Validate Configuration Wizard.
Failover clustering is installed as a feature in Windows Server 2012 R2. After the feature is installed, the DHCP role must be added to the cluster using the Failover Cluster Manager. This process involves configuring the hostname and IP address for the cluster, as well as selecting the storage device. Additional nodes can be added to the failover cluster using the Add Node Wizard in the Failover Cluster Manager. A high-level overview of the steps to implement a DHCP failover cluster is as follows (a scripted sketch of the first three steps appears after the list):
1. Configure appropriate networking and shared storage between servers.
2. Install the Failover Clustering feature on each server by using the Add Roles and Features Wizard.
3. Create the cluster using the Create Cluster Wizard.
4. Deploy and configure the DHCP role within the failover cluster.
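As referenced above, this minimal sketch covers the first three steps and assumes two servers named DHCP1 and DHCP2 with shared storage already presented to both; the cluster name and IP address are examples.
# Install the Failover Clustering feature on both nodes
Invoke-Command -ComputerName DHCP1, DHCP2 -ScriptBlock { Install-WindowsFeature Failover-Clustering -IncludeManagementTools }
# Validate the configuration and create the cluster
Test-Cluster -Node DHCP1, DHCP2
New-Cluster -Name DHCPCLUSTER -Node DHCP1, DHCP2 -StaticAddress 10.0.0.50
The DHCP role itself is then added through the Failover Cluster Manager, as described in step 4.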
DHCP failover
Windows Server 2012 introduced a new option for DHCP high availability in DHCP failover, which creates a partnership between two DHCP servers, enabling them to share the responsibility for a scope. It is important to understand the distinction between DHCP failover and failover clustering. Although using DHCP in a failover cluster requires that a cluster first be deployed, DHCP failover is a function of the DHCP service. The two servers synchronize their databases and recognize the availability of their partner. Although DHCP failover partnerships are created between two servers, they are on a scope-by-scope basis, meaning that one DHCP server can have different DHCP failover partnerships for each scope it hosts.
When planning for reliability in your DHCP service, keep in mind the networking limitations of the DHCP. Because DHCP broadcasts do not traverse routers by default, it is important to plan for these broadcasts to reach a secondary server if the primary is unavailable. If your secondary server is located in another subnet, it might be necessary to configure DHCP relay even if your primary server is local to your clients.
DHCP failover supports two modes: load balancing mode and hot standby mode. The choice between the two modes is made during completion of the Configure Failover Wizard, as shown in Figure 2-3.
FIGURE 2-3 Configuring DHCP failover in load balancing mode
Load Balancing Mode
Load balancing mode enables you to divide the workload between the two servers, either splitting the load evenly or prioritizing one server over another. Unlike a split-scope configuration, however, each server can provide the full range of IP addresses if the partner server is unavailable. Figure 2-4 illustrates two DHCP servers operating in load balancing mode, distributing the workload evenly between the two servers. The diagram illustrates the two DHCP servers alternating the responsibility of responding to DHCP requests. If desired, the load can be balanced unevenly, using a weighted load to have one server provide an increased percentage of the workload.
FIGURE 2-4 Two DHCP servers in a failover relationship operating in load balancing mode
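A load balancing relationship can also be created with the Add-DhcpServerv4Failover cmdlet. This sketch assumes two example servers, dhcp1 and dhcp2.contoso.com, an existing 10.0.1.0 scope, and a 70/30 weighting chosen purely for illustration:
# Create a load balancing failover relationship for one scope, weighted 70/30
Add-DhcpServerv4Failover -ComputerName dhcp1.contoso.com -Name "HQ-Failover" -PartnerServer dhcp2.contoso.com -ScopeId 10.0.1.0 -LoadBalancePercent 70 -SharedSecret "Ex@mpleSecret1"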
Hot Standby Mode
Hot standby mode enables a secondary server to take over if the primary DHCP server is unresponsive for a predetermined amount of time. Because the necessary database changes are synchronized and not centrally stored, DHCP failover in hot standby mode is a good fit for a standby server located off-site. It can be beneficial for organizations in which having multiple servers at each location is difficult due to budget constraints or management workload.
In hot standby mode, the primary server is configured to accept all incoming DHCP requests, whereas the partner server is configured with a slight delay and reserves a percentage of the IP address pool for itself. If a DHCP lease is provided by the standby server, as shown in Figure 2-5, the lease duration is set based on the Maximum Client Lead Time (MCLT), which can be configured during the creation of the failover partnership.
FIGURE 2-5 DHCP failover in hot standby mode
Another function of the MCLT is to define how long the standby server should wait to take on the primary role after losing communication with the primary server. When the standby server loses communication with its partner, it enters a communication interrupted state. If communication has not been reestablished by the end of the MCLT duration, it remains in this state until manually changed to the partner down state by an administrator. Alternatively, the failover partnership can automatically switch to the partner down state based on the value of the State Switchover Interval. A hot standby server cannot offer the DHCP scope’s full IP address pool to clients until it enters the partner down state.
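For reference, a hot standby relationship with the behavior described above might be configured as in the following sketch; the server names, reserve percentage, and time values are examples, and the scope must already exist on the active server.
# Partner dhcp2 as a hot standby holding 5 percent of the pool in reserve,
# with a 1-hour MCLT and automatic transition to partner down after 2 hours
Add-DhcpServerv4Failover -ComputerName dhcp1.contoso.com -Name "Branch-Standby" -PartnerServer dhcp2.contoso.com -ScopeId 10.0.2.0 -ServerRole Active -ReservePercent 5 -MaxClientLeadTime 01:00:00 -AutoStateTransition $true -StateSwitchInterval 02:00:00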
Exam Tip
Your knowledge of new features is typically assessed throughout the exams, so you should be intimately familiar with DHCP failover. Specifically, you should know how to configure it, how it functions when one server becomes unavailable, and what types of scenarios are best for different configurations.
DHCP interoperability
At the top of this section, we mentioned how critical DHCP is to your entire infrastructure. This is especially true when viewed through the lens of DHCP interoperability, or how DHCP interacts with other services in your network. The simple process of connecting a computer to the network and authenticating to Active Directory depends largely on DHCP providing the IP address and DNS information for the client to contact a domain controller. A key aspect of planning and designing a DHCP implementation is to understand what configuration options must be in place and where they should be configured.
In a modern network, particularly one using Active Directory, DHCP and DNS go hand in hand. Often, DHCP clients receive their DNS server configuration through DHCP options, listing one or more DNS servers. Your DHCP server can also be used to perform dynamic updates of DNS records, keeping name resolution for clients consistent and up to date. DHCP-based DNS dynamic updates are just one method of keeping client resource records (RRs) updated and are typically used with legacy clients. Computers using Windows Vista, Windows Server 2008, or a later operating system perform dynamic updates using the DNS client and DHCP client services. DNS dynamic updates are configured in the properties for the IP type node, as shown in Figure 2-6.
FIGURE 2-6 Configuring DNS dynamic updates from the DHCP server
Two new features in Windows Server 2012 R2 relate specifically to DHCP interoperability with DNS. Starting with Windows Server 2012 R2, you can configure DHCP to create or update only DNS A records for clients instead of both A records and pointer (PTR) records. The other new capability in Windows Server 2012 R2 is the ability to create a DHCP policy condition based on the FQDN of the DHCP client. It can be used to register DHCP clients using a specific DNS suffix or even configure DNS registration for guest devices such as workgroup computers or bring-your-own-device (BYOD) scenarios.
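These DNS registration behaviors can also be set from PowerShell with Set-DhcpServerv4DnsSetting. This sketch assumes an example scope and the Windows Server 2012 R2 release of the cmdlet; the parameter controlling A-record-only registration is assumed here to be DisableDnsPtrRRUpdate, so verify it against the cmdlet help on your build.
# Always perform dynamic updates for this scope and clean up records at lease expiry
Set-DhcpServerv4DnsSetting -ScopeId 10.0.1.0 -DynamicUpdates Always -DeleteDnsRROnLeaseExpiry $true
# Windows Server 2012 R2: register only A records, skipping PTR records (parameter name assumed)
Set-DhcpServerv4DnsSetting -ScopeId 10.0.1.0 -DisableDnsPtrRRUpdate $true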
Another service that can be implemented in conjunction with DHCP is Network Access Protection (NAP), which is a service used to constrain or manage a client’s network connectivity based on specific criteria such as the client’s health, method of connectivity, or type of hardware. A critical aspect of NAP is the need to evaluate the client computer prior to full network connectivity being established, making DHCP one service that can be used as an enforcement method. Although DHCP is a valid NAP enforcement method, it is by far the weakest because it can be bypassed by simply configuring a static address. Although NAP is officially deprecated in Windows Server 2012 R2, it is still part of this exam. (We’ll delve into NAP more in Chapter 3.) Enforcement of NAP can be configured for individual scopes or enabled on all scopes, as shown in Figure 2-7.
FIGURE 2-7 Enabling NAP enforcement using DHCP
DHCPv6
By this time, you are probably aware of the limitations of the IPv4 address space, particularly given the explosion of the Internet. You also probably know that IPv6 is the solution for the IP address crunch, at least for the foreseeable future. The IPv6 standard supports client autoconfiguration, so IPv6 clients are capable of generating their own unique IPv6 addresses and configuring themselves to use them. DHCP for IPv6 (DHCPv6) can operate in both stateful and stateless modes, in line with the IPv6 standard.
Stateless DHCPv6 allows clients to obtain their IPv6 addresses through autoconfiguration while the DHCPv6 server provides additional configuration information such as DNS server addresses. Stateless mode requires minimal configuration, making it an especially good fit for organizations not yet fully invested in IPv6.
A stateful DHCPv6 implementation provides both the IPv6 address and additional configuration to clients. As with DHCP for IPv4, DHCPv6 requires configuration of scopes in order to offer IP addresses to DHCP clients; however, the scope configuration process differs somewhat from its IPv4 counterpart. Table 2-1 lists the options found in the New Scope Wizard for a DHCPv6 scope.
TABLE 2-1 Configuration options for DHCPv6 scopes
In addition to offering a vast number of IP addresses, IPv6 has several other differences from IPv4. The DHCPv6 request process differs significantly from IPv4 communication between client and server. Table 2-2 lists the communication steps involved in DHCPv6 and their IPv4 equivalents.
TABLE 2-2 DHCPv6 message types
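As a point of comparison with IPv4, the following sketch creates a stateful DHCPv6 scope and supplies DNS information through DHCPv6 options; the 2001:db8:1:: prefix, server address, and domain are documentation examples.
# Create a DHCPv6 scope for the 2001:db8:1::/64 prefix
Add-DhcpServerv6Scope -Prefix 2001:db8:1:: -Name "IPv6 Clients"
# Provide DNS server and domain search list options to DHCPv6 clients
Set-DhcpServerv6OptionValue -DnsServer 2001:db8::53 -DomainSearchList "contoso.com"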
Implementing DHCP filtering
DHCP filtering enables you to exert more fine-grained control over which clients receive IP addresses from your DHCP server. Although DHCP filtering might be unwieldy for an entire enterprise network, it can be very useful for smaller, more secure segments of your network.
DHCP filters can be created by using either an Allow or Deny rule, and are configured using a full MAC address to target a specific client or a partial MAC address with wildcards to target network adapters from a specific vendor. Filters can be created at the server or scope level, and they must be enabled before use. Some examples of valid MAC addresses include these:
C4-85-08-26-54-A8
C4-85-08-26-54-*
C4-85-08-26-*-*
One example of using DHCP filtering is a datacenter environment. Although many servers have static IP address assignments by design, this is not always the case. You can also have workstations within your datacenter network for management purposes, but you probably want to limit the computers that can connect to this network. DHCP filtering can be part of the solution to limiting the computers that can gain access to your datacenter network. Creating a DHCP filter can be included as part of the provisioning process for new servers or workstations.
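A provisioning script for such a management workstation might add an allow filter and then enable allow-list filtering at the server level, as in this sketch (the MAC address is an example):
# Allow a specific management workstation by MAC address
Add-DhcpServerv4Filter -List Allow -MacAddress "C4-85-08-26-54-A8" -Description "Datacenter management workstation"
# Turn on the allow list so only filtered MAC addresses receive leases
Set-DhcpServerv4FilterList -Allow $true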
A new feature in Windows Server 2012, DHCP Policies, takes the concept of filtering a step further. Whereas filters can be used only to allow or deny a DHCP lease, policies allow you to configure specific DHCP options based on a client’s MAC address, user class, vendor class, or several other criteria. Windows Server 2012 R2 even allows you to create a policy condition based on a client’s FQDN.
When DHCP policies are incorporated into your DHCP design plan, you open up capabilities such as registering DNS records for non-domain DHCP clients. In this scenario, you would create a condition in which the client’s FQDN did not match that of your domain and then configure DHCP option 015 (DNS Domain Name) with the DNS suffix used to register non-domain devices.
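A sketch of that guest-registration scenario follows. It assumes the Windows Server 2012 R2 Add-DhcpServerv4Policy cmdlet with its FQDN-based condition; the exact condition syntax and the guest suffix shown are illustrative, so verify them against the cmdlet help before use.
# Match clients whose FQDN is not in the corporate domain (condition syntax assumed)
Add-DhcpServerv4Policy -Name "Guest Devices" -ScopeId 10.0.1.0 -Condition OR -Fqdn "NE,*.contoso.com"
# Hand those clients a different DNS suffix through option 015
Set-DhcpServerv4OptionValue -ScopeId 10.0.1.0 -PolicyName "Guest Devices" -OptionId 15 -Value "guest.contoso.com"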
Exam Tip
DHCP policies are not mentioned specifically in the exam objectives, but the feature is entirely new in Windows Server 2012, and the ability to use a client’s FQDN was added in Windows Server 2012 R2. Be prepared to answer questions on this topic.
Implementing and configuring a DHCP Management Pack
Beyond the configuration and management tools provided by default in a Windows Server 2012 R2 installation, Microsoft offers an additional level of control through Microsoft System Center 2012. This holds true with DHCP because Microsoft offers the DHCP Management Pack as part of Microsoft System Center 2012 Operations Manager. The DHCP Management Pack enables you to centrally monitor and manage several key aspects of DHCP servers throughout your network, including the following:
DHCP server health
DHCP failover status
DHCP database health
DHCP security health
DHCP performance
DHCP configuration changes
The DHCP Management Pack does require Microsoft System Center 2012 Operations Manager and specifically targets DHCP servers running Windows Server 2012 or later. Additional management packs are available for use with DHCP servers running older versions of Windows Server.
Maintaining a DHCP database
Windows Server 2012 R2 DHCP servers store configuration and lease information in a database. The location of this database can be configured in the DHCP server properties, as shown in Figure 2-8. The DHCP server’s database can be manually backed up or restored through the Actions context menu. An automated backup runs every 60 minutes by default. The location of this backup can also be configured in the DHCP server properties, and the interval can be modified by editing the BackupInterval value in the registry key found in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters.
FIGURE 2-8 Configuring DHCP database and backup locations
It might be necessary to periodically resolve mismatches in the DHCP server database. You can do this by choosing Reconcile All Scopes at the IPv4 or IPv6 level, or Reconcile At The Scope Level in the DHCP Management console.
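These maintenance tasks can be scripted. The following sketch backs up and restores the database with the built-in cmdlets and adjusts the automated backup interval through the registry value named above; the paths shown are the defaults and the 120-minute interval is an example.
# Manually back up the DHCP database
Backup-DhcpServer -Path "C:\Windows\System32\dhcp\backup"
# Restore the database from a backup
Restore-DhcpServer -Path "C:\Windows\System32\dhcp\backup"
# Change the automated backup interval to 120 minutes
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters" -Name BackupInterval -Value 120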
Thought experiment: Planning a highly available DHCP implementation
In this thought experiment, apply what you’ve learned about this objective. You can find answers to these questions in the “Answers” section at the end of this chapter.
You are being consulted on updating a large corporation’s DHCP implementation. This organization has a corporate headquarters with a centralized datacenter and multiple branch offices. Each branch office has a single server operating as a DHCP server, DNS server, and domain controller. The corporation’s IT department is located entirely at its headquarters and it needs to be able to efficiently monitor and manage DHCP services throughout the company.
An additional corporate requirement is to ready its network for IPv6. The IT department wants to minimize the configuration workload for its IPv6 implementation for the time being.
Finally, your client wants to allow guest devices on the network, but needs to be able to distinguish these guest devices from those that are corporate-owned in DNS.
Answer the following questions based on this scenario:
1. What DHCP high-availability technique is the best fit for this company? What level of service would be provided by servers at each branch? At headquarters?
2. How would you meet the requirement to efficiently monitor and manage DHCP throughout the company? What other requirements must be in place to implement this solution?
3. What type of IPv6 addressing would honor the IT department’s request to minimize configuration workload?
4. Is there a capability within DHCP to control how DHCP clients are registered in DNS? If so, what is it?
Objective summary
Split scopes are the traditional method of providing high availability, but they limit the IP address pool when one DHCP server is unavailable.
Using DHCP in a failover cluster enables you to distribute your DHCP load across two or more servers, but requires shared storage and adds complexity to your deployment.
DHCP failover is a new feature in Windows Server 2012 that allows you to partner two DHCP servers that then synchronize their DHCP databases to provide high availability.
DHCP failover in load balancing mode actively shares the DHCP workload between two servers, whereas hot standby mode allows one DHCP server to serve DHCP clients with a second server used only when the first is unavailable.
DHCP interoperates with several other critical network services, including DNS, enabling the automatic creation or update of DNS records when a DHCP lease is renewed. In Windows Server 2012 R2, this capability is more flexible, allowing you to limit these updates to only A records rather than to A and PTR records.
New DHCP policy capabilities in Windows Server 2012 R2 enable you to create policies based on a client’s FQDN, allowing you to dynamically modify the registered DNS suffix or handle DNS registration for workgroup computers or guest devices.
DHCP filtering provides a means to restrict what computers are allowed to query the DHCP server by pattern matching the client’s MAC address.
The DHCP Management Pack for Microsoft System Center 2012 allows various aspects of DHCP to be managed and monitored at an enterprise-wide level.
Objective review
Answer the following questions to test your knowledge of the information in this objective. You can find the answers to these questions and explanations of why each answer choice is correct or incorrect in the “Answers” section at the end of this chapter.
1. DHCP failover supports two modes. What is the name of the mode that configures both partner servers to share the responsibility for responding to DHCP clients with the entire pool of addresses?
A. Split scope
B. Hot standby
C. Load balancing
D. Failover cluster
2. What is the maximum number of nodes that can be used in a DHCP failover cluster?
A. 2
B. 6
C. 10
D. 64
3. What is the recommended high-availability method for an organization with multiple branch offices and a central datacenter?
A. Split scope
B. DHCP failover in hot standby mode
C. DHCP failover cluster
D. DHCP failover in load balancing mode
4. What duration is used when a DHCP client obtains a lease from a hot standby server?
A. The lease duration for the scope
B. The lease duration for the DHCP server
C. Until the primary DHCP server returns to service
D. The value of the Maximum Client Lead Time
5. Which of these services supports interoperability with DHCP but is not a recommended solution?
A. DNS
B. Active Directory
C. VPN
D. NAP
6. How would you use DHCPv6 to provide only network configuration information to a client, not an IP address itself?
A. DHCP filtering
B. Stateless DHCP
C. DHCP options
D. Stateful DHCP
7. How frequently is the DHCP database backed up by default?
A. 30 minutes
B. 60 minutes
C. 12 hours
D. 24 hours
Objective 2.2: Design a name resolution solution strategy
DNS has the potential to be the single most critical aspect of your network. Resolving names to IP addresses is critical for network services and human usability, particularly in an industry moving to IPv6. Even beyond name resolution, DNS is used to direct computers to critical services such as email or Active Directory.
One significant area of focus for DNS recently is security. Due to the nature of the service, it can be catastrophic if a malicious user can register a DNS record that could then be used to direct traffic to a server under the user’s control. Windows Server 2012 R2 takes some important steps in securing your DNS servers and ensuring that names are registered and resolved correctly and securely. Knowing what attack vectors are likely to be used to target your DNS infrastructure and which features in DNS can be used to mitigate potential areas of risk are probably the most critical aspects of designing a DNS deployment. The conceivable damage from an attacker being able to manipulate DNS is boundless because almost any network traffic can be diverted if the name resolution process becomes compromised.
Although not the only security feature provided for DNS, Domain Name System Security Extensions (DNSSEC) is a huge part of validating name resolution. Although DNSSEC was available in Windows Server 2008 R2, it was not until Windows Server 2012 that DNS zones could be signed while online. Another factor to be considered in the design phase of your DNS infrastructure is the performance overhead DNSSEC requires. A DNSSEC signed zone has four times the number of RRs of an unsigned zone. For file-backed DNS zones, this means four times the disk space is used by DNS; for Active Directory–integrated zones, you must plan for a significant increase in memory consumption. Similarly, DNS queries will result in an increase in network traffic as RRs are queried and validated.
When you design the DNS structure for an organization, the makeup of that structure should be carefully planned. The structure of DNS is created using zones and zone delegations. Understanding the name resolution process and how other services such as Active Directory integrate with DNS will be driving forces in your DNS infrastructure design.
This objective covers how to:
Configure secure name resolution using DNSSEC
Support DNS interoperability
Manage DNS replication using application partitions
Provide name resolution for IPv6
Support single-label DNS name resolution
Design a DNS zone hierarchy
Configuring secure name resolution
Because name resolution is such a critical aspect of your network infrastructure, it is essential to ensure that all aspects are secure and can be trusted. Windows Server 2012 R2 includes tools to secure your DNS servers, protecting them from false DNS registrations. You also have the ability to use DNSSEC to add a validation process to DNS queries, preventing a malicious user from spoofing a response to a DNS client.
DNS infrastructure security
A large part of making your DNS infrastructure secure is designing your infrastructure and topology with security in mind. Features such as Active Directory–integrated DNS zones improve replication performance and security as well as allowing for secure dynamic updates. Using file-backed DNS zones requires you to choose between completely disabling dynamic updates and leaving your DNS infrastructure open to significant attack. Likewise, limiting zone transfers to only known hosts should be one of the first things configured in your DNS zones.
Another aspect of your DNS infrastructure that needs planning is the support of DNS queries from both internal and external sources. Most organizations do not need every host on their network to be available to the Internet, so those hosts’ names should not be resolvable by Internet hosts. There are multiple ways to segregate your DNS infrastructure. First, you can have a separate DNS structure for internal and external queries. Often a DNS server providing responses to the Internet has its RRs configured manually, thus preventing false resource records from being created in your zone. Another method of segregating your DNS is to have separate namespaces for your internal and Internet-facing hosts, such as contoso.local and contoso.com, respectively.
Outgoing DNS queries are another area to design for security. If every DNS server in your network is allowed to directly perform Internet DNS queries, this could unnecessarily expose these servers to attack. A better practice is to designate one or more DNS servers to perform external queries and configure the remainder of your DNS servers to refer external queries to these servers, either through forwarding or root hints.
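Both practices can be put in place with the DnsServer PowerShell module. This sketch restricts zone transfers for an example zone to two named secondaries and points an internal DNS server at designated forwarders instead of having it perform its own Internet recursion; all names and addresses are examples.
# Allow zone transfers only to known secondary servers
Set-DnsServerPrimaryZone -ZoneName "contoso.com" -SecureSecondaries TransferToSecureServers -SecondaryServers 10.0.0.21, 10.0.0.22
# Send external queries to designated forwarders rather than using root hints
Set-DnsServerForwarder -IPAddress 10.0.0.30, 10.0.0.31 -UseRootHint $false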
Domain Name System Security Extensions (DNSSEC)
DNSSEC is a set of Internet Engineering Task Force (IETF) standards that adds a validation process to DNS queries using digital signatures. Digital signatures are used in DNSSEC to sign individual records as well as relationships up and down the hierarchy, providing a method to ensure a chain of trust throughout a recursive DNS query.
DNSSEC requires DNS servers and networking hardware that support EDNS0, because DNSSEC responses use larger DNS packets, which EDNS0 enables. Where networking hardware does not support EDNS0, DNS queries and validation might not occur properly.
A typical DNS query is resolved recursively through several steps. The process for a typical recursive DNS query is shown here:
1. The DNS client begins the name resolution process by checking the local DNS cache, including names listed in the local Hosts file, which is preloaded into the cache when the DNS client service starts. The client cache also includes information from previous successful DNS name resolution queries. Cached RRs are retained for the duration of the Time To Live (TTL) value.
2. If the DNS query is not resolved in step 1, the client queries its preferred DNS server.
3. The client’s preferred DNS server attempts to match the queried domain name against the zones it hosts.
4. If the DNS query cannot be resolved using the DNS server’s own zones, the DNS server cache is searched for a match. These cached RRs are stored after successful name resolution queries and last for the duration of the RR’s TTL.
5. After the preferred DNS server determines that it cannot resolve the DNS query internally, it begins a recursive query through the DNS namespace. Beginning with the Internet root DNS servers, as defined in the root hints, the DNS server queries through the DNS namespace from top to bottom until it contacts a DNS server that can provide a response to the query.
6. After a response is provided to the recursive DNS server, it caches the response locally and provides a response to the DNS client.
7. After the DNS client receives its response, it also caches the response for future use.
If recursion is disabled on the DNS server or a DNS client does not request recursion, an iterative process can be used to resolve the DNS name. An iterative DNS query is quite similar to a recursive query, except the client performs each of the queries instead of using the preferred DNS server. When using an iterative query, the client receives a response containing the DNS server that matches the requested domain name most closely. Using this newly identified DNS server, the client continues to query through the DNS hierarchy until it resolves the domain name.
Several response types can be returned by DNS servers in response to queries, each with a different meaning to the querying client or server. An overview of the various DNS query responses is provided in Table 2-3.
TABLE 2-3 DNS query responses
Attack Methods
When a malicious user chooses DNS as a target, the goal is typically to compromise a DNS query to provide a false reference to a server under the attacker’s control. If successful in returning a false response to a recursive DNS server, the attacker controls not only what server this traffic gets directed to but also the TTL value of the cached record, resulting in traffic being misdirected for a prolonged period of time.
One potential method for an attacker to compromise a DNS query is to intercept the DNS query from either the client or the DNS server. If able to get between the corporate DNS server performing recursive queries and the authoritative DNS server for the record being requested, the attacker could provide a response containing whatever values he or she wants. This type of attack, shown in Figure 2-9, is known as a man-in-the-middle (MITM) attack.
FIGURE 2-9 An MITM attack requires a malicious user to intercept DNS query traffic
A second form of DNS attack does not require the attacker to step into the process, but does require some luck or (more precisely) some time. In this scenario, the attacker does not attempt to intercept the DNS query; instead, the malicious user simply attempts to predict a query response and trick the recursive DNS server into accepting this response.
Figure 2-10 shows this type of attack, referred to as spoofing. The difficulty of a spoofing attack is that the malicious user has to correctly guess or predict not only the name and record type being requested but also the XID (a 16-bit random value that must match the DNS request) and the ephemeral port (a random port selected from the DNS socket pool) being used for the specific query. With enough time and resources, both the XID and the port used can potentially be compromised by a skilled attacker.
FIGURE 2-10 Using spoofing, a malicious user attempts to predict the information required for a DNS response while providing information used to direct network traffic to a server of the attacker’s choosing
In both scenarios, the attackers are attempting to perform cache poisoning. By inserting information of their design into the corporate DNS cache, they can direct Internet traffic to their own servers to exploit whatever information passes.
Secure Name Resolution
DNSSEC does not prevent an attacker from intercepting DNS traffic or attempting to spoof the response to DNS queries. Instead, DNSSEC-enabled queries allow the recursive DNS server and DNS client to validate that the responses they receive are valid and come from a trusted DNS server. This validation is enabled by using digital signatures to sign and validate zone delegations in order to create a chain of trust from the root domain down to individual corporate domains, as well as signatures for individual RRs within a DNSSEC signed zone. By validating these digital signatures at each step in the query process, a DNS server can ensure that the responses being received are correct and have not been provided by a malicious user.
DNSSEC capability is communicated and negotiated within the DNS query through the use of flags, or single bits of data within the query. DNS clients and servers indicate whether they are DNSSEC-aware and capable of returning signed responses through the use of these flags throughout the DNS query process. Table 2-4 lists the flags used to indicate different aspects of DNSSEC capability.
TABLE 2-4 DNS flags used with DNSSEC
A typical DNS query validated using DNSSEC would progress in the following way (see Figure 2-11):
1. A DNS client queries its preferred DNS server, indicating that it is DNSSEC-aware.
2. The corporate DNS server begins performing a recursive query, starting with the DNS root domain. It, too, indicates that it is DNSSEC-aware using the DO bit.
3. A DNS root server then replies to the query from the corporate DNS server, providing the DNS RRs for the requested child domain, the Resource Record Signature (RRSIG) record used to sign those records, and the DS record used to sign the relationship between the parent and child domains. The recursive DNS server queries one of the DNS servers in charge of the child domain, comparing the key in the DS record with that of the DNSKEY record in the child domain.
4. If the delegation is validated, the recursive DNS server continues the query process by querying the DNS server for the child domain. If the response includes another delegation, the validation process continues recursively through the DNS structure as shown in steps 3 and 4.
5. After a DNS server containing the requested RR is reached, the record is returned along with the corresponding RRSIG record. If the requested record name and type do not exist, the NSEC or NSEC3 record and its RRSIG record are returned, indicating a negative response.
6. Whether the response is positive or negative, the response is validated to ensure that the response is legitimate. If the DNS client is DNSSEC-aware, the response from the corporate DNS server includes the AD bit set to 1, indicating a validated response.
FIGURE 2-11 A typical DNSSEC-validated query uses recursion to resolve the DNS name
Note: Testing and Troubleshooting DNSSEC
Although Nslookup.exe has long been the standard tool for troubleshooting issues related to DNS, Nslookup is not a suitable tool for diagnosing problems with DNSSEC. Nslookup uses an internal DNS client, not the system DNS client. The Resolve-DnsName cmdlet with the -DnssecOk parameter is a better tool to troubleshoot DNSSEC signed name resolution.
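For example, the following query (the name and server are placeholders) sets the DO bit so the RRSIG data is returned along with the answer:
Resolve-DnsName -Name www.contoso.com -Type A -Server dns1.contoso.com -DnssecOk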
Chain of Trust
Each DNSSEC signed zone has a trust anchor, which is designated by a DNSKEY record. These DNSKEY records contain the public key for either the zone signing key (ZSK) or key signing key (KSK) used to sign the zone. Parent domains refer to a delegation using a DS record, indicating that the child zone is DNSSEC-aware. These DS records are not automatically created either during the delegation creation process or at zone signing, but must be manually created to provide validation for the delegation. This DNSKEY-to-DS pairing creates the chain of trust that can be validated throughout the name resolution process, ensuring that a malicious user is not posing as a legitimate DNS server.
In a corporate DNS infrastructure, trust anchors must be distributed to each DNS server that provides DNS responses for a particular zone. When using Active Directory–integrated DNS zones, this process can be automated from the Trust Anchor tab of the DNSSEC properties for the zone. Windows Server 2012 also adds the Trust Points node in DNS Manager, allowing you to quickly determine where trust points are configured within your DNS structure.
Zone Signing
A DNS zone becomes capable of providing DNSSEC signed responses through the zone signing process. Although Windows Server 2008 R2 supports DNSSEC, it supports signing a zone only while the zone is offline. Windows Server 2012 and later support online signing of DNS zones. During the zone signing process, keys are created and then used to create signatures for each RR in the zone.
Each RR typically found in a DNS zone, such as A and CNAME records, has two additional RRs associated with it after the zone is signed. The RRSIG record type contains the digital signature used to validate the RR (see Figure 2-12). Either a Next Secure (NSEC) or Next Secure 3 (NSEC3) record is created to validate a negative response from a DNS server when a requested record does not exist. NSEC3 records have the additional benefit of preventing zone walking, by which an attacker could enumerate all the RRs in your zone. Additionally, this NSEC or NSEC3 record has an RRSIG record of its own. The net result is at least four times the number of RRs per DNSSEC signed zone, which could have an impact on performance because additional memory and storage capacity might be needed.
FIGURE 2-12 RRSIG RRs provide digital signatures for each A or AAAA RR
Table 2-5 describes the DNS RRs used by DNSSEC.
TABLE 2-5 DNS RRs introduced by DNSSEC
In Windows Server 2012, DNS zones can be signed by using the Zone Signing Wizard found under DNSSEC in the context menu for the zone (shown in Figure 2-13) or by using the Invoke-DnsServerZoneSign PowerShell cmdlet.
FIGURE 2-13 Completing the DNSSEC Zone Signing Wizard
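When scripting, the equivalent of accepting the wizard defaults is a single call such as the following (the zone name is an example):
# Sign the zone using the default signing parameters
Invoke-DnsServerZoneSign -ZoneName "contoso.com" -SignWithDefault -Force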
Key Management
The key to DNSSEC is, quite literally, the keys used to sign RRs. For each DNSSEC signed zone, the Key Master is responsible for managing and maintaining the keys used for that zone. The Key Master role can be transferred by using either the Key Master tab in the DNSSEC properties window or the Reset-DnsServerZoneKeyMasterRole cmdlet. Transfer of the Key Master role is possible only with Active Directory–integrated zones.
Two keys are typically used for DNSSEC signed zones. The KSK is used to digitally sign the ZSK; the ZSK is used to sign the remaining RRs in the zone. From a DNSSEC standpoint, there is no real difference between the two keys; in fact, the same key can be used for both purposes. The benefit afforded by using both key types has to do with balancing the need for frequent key rollover for security purposes and the need to minimize the administrative overhead involved with the key rollover process. The two key types can be managed in the DNSSEC properties for the zone using the KSK and ZSK tabs.
Key rollover is the process of changing the keys used to create digital signatures. Because this key is what protects the integrity of your DNS infrastructure, it is important to change it periodically, much like requiring users to change their passwords. The problem is that the existing keys have been used to sign RRs, which often have been replicated to other servers, published to a parent zone, or cached on a recursive DNS server. To prevent scenarios in which name resolution queries cannot validate correctly due to mismatched keys, there are two key rollover methods that allow you to change your DNSSEC keys without interfering with the name resolution process.
Double Signature rollover involves generating and using two keys rather than one. For the ZSK, it results in the size of the zone being doubled because each RR must be signed using each ZSK. For KSK rollover, only the corresponding DNSKEY record must be duplicated, and communication to the parent zone needs to happen only once to provide an updated DS record.
Key Pre-publication also creates two keys, but the process differs after that. When used for the ZSK, only the DNSKEY record is duplicated. After the key rollover is performed, the zone is signed using the second key; the original is retained until all cached signatures would have expired. In the case of the KSK, Key Pre-publication duplicates the DNSKEY and publishes the associated DS record to the parent zone. After the DS record has been fully propagated through the DNS servers for the parent zone, the original DS record is removed, and the new KSK is put into service. Table 2-6 enumerates the pros and cons of pairing each key rollover method with the two key types.
TABLE 2-6 Pros and cons of key rollover methods
Due to the requirements of the rollover processes for the two key types, it is recommended to use Key Pre-publication for the ZSK and Double Signature for the KSK. Automatic rollover for both key types can be enabled in the ZSK and KSK tabs in the DNSSEC properties for the zone. The automated key rollover process uses the recommended rollover method for each key type.
Name Resolution Policy Table
The Name Resolution Policy Table (NRPT) is used to configure the way DNS clients in Windows Vista, Windows Server 2008, and later perform their name resolution. Managed within Group Policy, the NRPT allows you to configure the DNS client based on domain names or wildcard matches. Using the NRPT, you can configure the way computers in your organization use DNSSEC, when validation should be required, and even when IPsec should be used to encrypt communication between the DNS client and the DNS server.
Policies in the NRPT can be applied using several methods, which are listed in Table 2-7.
TABLE 2-7 Criteria for applying policies in the NRPT
As an example, your organization might want to perform DNSSEC-enabled queries for external names but needs to force validation for internal name resolution. In this scenario, two entries are created in the NRPT, one for the contoso.com suffix, and the other for Any. Both policies are configured to use DNSSEC, but only the contoso.com policy requires name and address data validation.
The NRPT is shown in Figure 2-14 and can be found in the Group Policy Management Editor under Computer Configuration\Policies\Windows Settings\Name Resolution Policy. Additionally, the client NRPT can be viewed using the Get-DnsClientNrptPolicy cmdlet.
FIGURE 2-14 Configuring the NRPT
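Outside of Group Policy, equivalent rules can be sketched with the DnsClient NRPT cmdlets for testing on a single computer; the namespace below is an example, and production settings would normally be deployed through the GPO shown in Figure 2-14.
# Require DNSSEC validation for the internal namespace
Add-DnsClientNrptRule -Namespace ".contoso.com" -DnsSecEnable -DnsSecValidationRequired
# Review the effective NRPT on the client
Get-DnsClientNrptPolicy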
More Info: DNSSEC
For more information on DNSSEC, visit http://technet.microsoft.com/en-us/library/jj200221.aspx. For more information on new DNSSEC features in Windows Server 2012, visit http://technet.microsoft.com/en-us/library/dn305897.aspx#DNSSEC. For more information on changes to DNSSEC in Windows Server 2012 R2, visit http://technet.microsoft.com/en-us/library/dn305898.aspx#DNSSEC.
Exam Tip
DNSSEC has received some significant new capabilities in both Windows Server 2012 and Windows Server 2012 R2. Make sure you know what new features are available in each edition and have a good understanding of DNSSEC.
DNS socket pool
DNS servers use a range of ports known as the DNS socket pool to reply to DNS queries. Using a pool of ports provides a level of protection against DNS spoofing attacks by requiring a malicious user to correctly guess the randomly selected response port in addition to a random transaction ID. Increasing the size of your DNS socket pool reduces the odds of an attacker correctly guessing the port number being used for a DNS response.
Windows Server 2012 has a default DNS socket pool size of 2,500 that can be adjusted to between 0 and 10,000. Exclusions can also be made when a port range is already being used by another service.
To configure the socket pool size, use the following command:
dnscmd /Config /SocketPoolSize <value>
An exclusion range can be added to prevent interference with other applications or services using this command:
dnscmd /Config /SocketPoolExcludedPortRanges <excluded port ranges>
Although increasing the size of the DNS socket pool reduces the odds of an attacker compromising a DNS query, by itself this is not enough of a security measure to protect your DNS infrastructure from attack. It is recommended that you manage the size of the DNS socket pool in addition to implementing other measures such as DNSSEC.
Cache locking
DNS clients and servers make heavy use of caching to expedite DNS queries. Rather than performing a full recursive DNS query when resolving a name, both the client and server will typically cache the response for later use. The amount of time a query response is cached is determined by a TTL value assigned to the RR.
While a DNS response is cached, it can be updated if new information about the RR is received. A potential vulnerability known as cache poisoning involves the possibility of an attacker updating a cached record with false information and a long TTL value. When cache poisoning occurs, it allows an attacker to direct users to a server under his or her control.
To prevent cache poisoning, the DNS server role has supported cache locking since Windows Server 2008 R2, whereby updates to cached records are restricted for a percentage of the record’s TTL. For example, if the cache locking value is set to 50, cached records will not be allowed to update for the first half of the TTL. The default cache locking value is 100, meaning updates are not allowed to cached records until the TTL has fully expired. The cache locking value can be configured using the following command:
Set-DnsServerCache -LockingPercent <percent>
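As a usage example (the value is arbitrary), the following relaxes the locking percentage and then reads the setting back; the LockingPercent property name is assumed from the cmdlet output.
Set-DnsServerCache -LockingPercent 75
Get-DnsServerCache | Select-Object LockingPercent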
In addition to configuring cache locking, the Set-DnsServerCache cmdlet can also set the maximum size of the DNS cache, the maximum TTL values for positive and negative responses, and other security-related settings for the DNS server cache.
Supporting DNS interoperability
In a network that depends largely on Active Directory, it is usually beneficial to use DNS servers running Windows Server because of the tight integration between Active Directory and DNS. When a client connects to the network and attempts to authenticate to a domain controller, DNS is the service used to locate a nearby domain controller. When a domain controller is added to the domain, not only are A and PTR records created in the DNS zone but service location (SRV) records are also created for clients to find a server to perform authentication. Figure 2-15 shows some of the SRV records used to direct clients to Active Directory domain controllers through both Kerberos and LDAP, as well as the _msdcs.contoso.com zone. Many other services and applications, such as Microsoft Exchange and Lync, use SRV records in DNS to refer clients to the appropriate server.
FIGURE 2-15 SRV records used for Active Directory
DNS servers running Windows Server do support interoperability with Berkeley Internet Name Domain (BIND), an alternative DNS server commonly used on UNIX platforms. Interoperability with older BIND secondary servers can be enabled in the DNS server properties by selecting the Enable BIND Secondaries check box on the Advanced tab, which disables the fast zone transfer format that those servers do not support.
Exam Tip
Name resolution touches so many other aspects of your infrastructure that DNS is one of the most important topics when studying for this exam. DNS interoperability with other services is crucial for understanding and diagnosing problems with everything from Remote Access to Active Directory.
Managing DNS replication with application partitions
Not properly planning for DNS replication can have a performance impact on your network: if your DNS servers are not strategically placed within your network, or replication is not properly configured, all sorts of performance issues can result.
Application partitions, also known as application directory partitions, are portions of the Active Directory database that control the scope of replication between domain controllers in your domain and forest. Active Directory-integrated DNS zones can have the following replication scopes:
To all DNS servers running on domain controllers within the domain
To all DNS servers running on domain controllers within the forest
To all domain controllers within the domain
To all domain controllers specified in the scope of a custom directory partition
To fine-tune the replication of your DNS zone, you can create custom application directory partitions and selectively choose which domain controllers should receive a copy of your DNS zone.
To illustrate the utility of using application partitions to define replication scope, Figure 2-16 shows a simple Active Directory forest. Each domain in the forest contains three domain controllers, two of which are also DNS servers. For this scenario, consider the need to replicate the contoso.com zone to one DNS server in each domain. Choosing to replicate to all DNS servers operating as domain controllers in the domain results in two instances of the DNS zone, both in the contoso.com domain. Expanding to DNS servers on domain controllers in the entire forest results in your zone being hosted on six separate servers throughout the forest. Replicating to all domain controllers within the domain doesn’t meet the requirement, either. Only by creating an application directory partition and adding one DNS server/domain controller from each domain can the requirement be met.
FIGURE 2-16 A simple Active Directory forest structure showing the utility of application partitions
There are three steps to configure DNS zone replication to domain controllers within an application partition. The first is to create the application partition by using the following command:
Add-DnsServerDirectoryPartition <Partition FQDN>
The second step is to include one or more domain controllers in the application partition by using this command, specifying the -ComputerName parameter if a remote server is to be added instead of the computer running the command:
Register-DnsServerDirectoryPartition <Partition FQDN> -ComputerName <ServerName>
The final step of configuring your DNS zone to replicate using an application partition is to configure replication in the DNS zone properties window. On the General tab, click the Change button next to Replication, select the option for replication within a directory partition and then choose the partition you want to use for the zone (see Figure 2-17).
FIGURE 2-17 Choosing a directory partition as the DNS replication scope
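The final step can also be performed from PowerShell rather than from the zone properties window; in this sketch the zone and partition names are examples and should match the partition created earlier.
# Point the zone at the custom directory partition for replication
Set-DnsServerPrimaryZone -ZoneName "contoso.com" -ReplicationScope Custom -DirectoryPartitionName "DnsPartition.contoso.com"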
Providing name resolution for IPv6
DNS for IPv6 clients is not significantly different from that for IPv4. The biggest difference is in the creation of RRs: Instead of using A records for name resolution, IPv6 hosts require AAAA (quad-A) records.
Reverse lookups also function in the same way as they do for IPv4. When you create a reverse lookup zone, the New Zone Wizard allows you to choose between IPv4 and IPv6, as shown in Figure 2-18. The wizard continues by asking for an IPv6 address prefix, which is specified as a prefix combined with a length, as in FE80::/64. Finally, you configure dynamic updates for the reverse lookup zone.
FIGURE 2-18 Creating a reverse lookup zone for IPv6
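As a quick sketch using the documentation prefix 2001:db8:1::/64, the following commands add a AAAA record to a forward zone and create the matching ip6.arpa reverse lookup zone by name; your own prefix determines the reversed-nibble zone name.
# Register a AAAA record for an IPv6 host
Add-DnsServerResourceRecordAAAA -ZoneName "contoso.com" -Name "host1" -IPv6Address "2001:db8:1::10"
# Create the reverse lookup zone for 2001:db8:1::/64 (reversed nibble format)
Add-DnsServerPrimaryZone -Name "0.0.0.0.1.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa" -ReplicationScope Domain -DynamicUpdate Secure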
Supporting single-label DNS name resolution
Single-label DNS name resolution refers to queries for names without a DNS hierarchy. Whereas host1.contoso.com is an FQDN, host1 is a single-label DNS name. In most cases, the requirement for a query of this sort stems from a legacy application that was designed to use Windows Internet Name Service (WINS) for name resolution. In some cases, this sort of query can be handled through the use of DNS suffixes, but there can be a significant performance impact if the number of DNS suffixes on a client becomes too large.
A better solution for single-label DNS name resolution is through the use of a GlobalNames Zone (GNZ). A GNZ allows you to support single-label DNS name resolution throughout a forest without having to query every domain using DNS suffixes. In fact, Windows Server 2012 uses the GNZ before using DNS suffixes by default, thereby improving name resolution performance. The GNZ can be enabled and configured using the Set-DnsServerGlobalNameZone PowerShell cmdlet.
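A minimal GNZ deployment, assuming Active Directory–integrated DNS and example names, might look like the following sketch; each single-label name is then published as a CNAME record inside the GlobalNames zone.
# Enable GlobalNames zone support on the DNS server
Set-DnsServerGlobalNameZone -Enable $true
# Create the forest-replicated GlobalNames zone and publish a single-label name
Add-DnsServerPrimaryZone -Name "GlobalNames" -ReplicationScope Forest
Add-DnsServerResourceRecordCName -ZoneName "GlobalNames" -Name "host1" -HostNameAlias "host1.contoso.com"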
Designing a DNS zone hierarchy
The hierarchical structure created by DNS is useful for many reasons. The first is performance: traversing just a few levels of the hierarchy narrows the number of possible results from millions to just a handful. Another benefit of DNS is the separation provided between domains and subdomains, enabling management to be delegated to an organization or even split between corporate branches.
DNS zone hierarchy
A DNS zone hierarchy refers to the tree-like structure defining the parent-child relationships within DNS. The DNS structure begins with the root domain, designated by a dot (.). In most FQDNs, the dot is assumed. Up one level from the root domain are the top-level domains such as .com, .net, .edu, and .gov (to name just a few). Following immediately after are private or corporate domains such as microsoft.com or technet.com. These domains can be further divided into support.microsoft.com or other subdomains, as shown in Figure 2-19.
FIGURE 2-19 A basic illustration of the structure provided by the DNS namespace
When a DNS client performs a typical name resolution query for www.microsoft.com, the DNS server begins at the root domain (.), is directed to the appropriate top-level domain (.com), progresses to the corporate domain (microsoft.com), and finally is directed to an individual host (www.microsoft.com).
DNS zone delegation
DNS zone delegations define the parent-child relationships within a DNS hierarchy. In Windows Server 2012, a zone delegation is created by using the New Delegation Wizard (shown in Figure 2-20), which is found by right-clicking a zone in DNS Manager and choosing New Delegation.
FIGURE 2-20 The New Delegation Wizard allows you to create a delegation by naming the new domain and referencing the DNS servers hosting the delegation
Through the process of completing the New Delegation Wizard, several things happen. First, the new DNS zone is created. Second, RRs are created in the parent zone referencing the new delegation. A name server (NS) record contains the name of the new child domain as well as the name server, whereas an A (or AAAA) record is created to resolve the name server to its IP address. This allows the parent domain to refer DNS clients to the child to resolve names within that domain.
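The same delegation can be created from PowerShell with Add-DnsServerZoneDelegation, which adds the NS record and the glue A record to the parent zone; the child zone name and name server details shown here are illustrative:
Add-DnsServerZoneDelegation -Name "contoso.com" -ChildZoneName "support" -NameServer "ns1.support.contoso.com" -IPAddress "10.0.0.5"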
Disjoint namespaces
A disjoint namespace is one whose Active Directory and DNS namespaces do not match. The most common configurations for disjoint namespaces are a multidomain Active Directory forest paired with a single DNS zone or a single Active Directory domain with multiple DNS zones.
Disjoint namespaces do require some manual configuration and maintenance. The DNS records used to refer clients to domain controllers are created automatically based on the client’s Active Directory domain. If this domain differs from the desired location, these RRs must be created and managed manually.
Some applications are developed under the assumption that the Active Directory and DNS namespaces will match. When implementing a disjoint namespace, ensure application compatibility through proper testing.
Thought experiment: Implementing secure DNS name resolution
In this thought experiment, apply what you’ve learned about this objective. You can find answers to these questions in the “Answers” section at the end of this chapter.
Your company’s DNS infrastructure recently came under increased scrutiny due to a security breach suffered by a competitor. You have been tasked with evaluating the existing DNS configuration and offering solutions to improve the security of your DNS implementation. However, you have no budget to implement these changes because of the current economic climate.
Upon evaluation, you find several things you decide are worth bringing to the attention of your superiors:
Your DNS zones are file backed.
Each of your DNS servers is configured to use DNS root servers for recursive name resolution.
DNS records are dynamically updated.
Your corporation uses the fabrikam.com zone to provide name resolution from both internal and external DNS clients.
DNSSEC is not currently implemented.
All your DNS servers are domain controllers, and each is running Windows Server 2012 R2.
Answer the following questions based on this scenario:
1. What security features are you missing out on because your DNS zones are file backed? Do any of the other criteria listed increase your concern over this finding? Which piece of information makes resolving this vulnerability a simple fix?
2. Do you have any concern about your DNS servers being configured to use DNS root servers directly? Is there any security or performance benefit to using this configuration?
3. How big a weakness is using the same domain for internal and external name resolution? What steps could you take to mitigate these risks?
4. What benefit would the implementation of DNSSEC provide? What sort of performance impact can you expect from implementing DNSSEC?
Objective summary
There are several inherent performance and security benefits to using Active Directory–integrated DNS zones, including secure zone transfers and secure dynamic updates to DNS RRs.
DNSSEC uses digital signatures to provide a validation process for DNS name resolution queries, including zone delegation, individual RRs, and negative responses for nonexistent RRs.
DNSSEC uses several new RRs to provide validation: DS and DNSKEY records validate the relationship between a parent domain and a child, RRSIG records validate individual RRs such as A or alias (CNAME) records, and negative responses are validated using NSEC and NSEC3 records. NSEC3 records also prevent zone walking.
Windows Server 2012 introduced the capability to sign a DNS zone while online, and Windows Server 2012 R2 provides the capability to use a DNS server hosting a file-backed DNS zone to perform the Key Master role.
The DNS socket pool, the range of ports used to provide DNS responses, can reduce the likelihood of an attacker correctly guessing the random port used for a DNS response.
Cache locking configures how soon into a cached RR’s TTL the record can be updated. By default, cache locking is set to a value of 100 (a percentage of the RR’s TTL), meaning that updates will not be accepted until the TTL has fully expired.
Replication of Active Directory–integrated DNS zones can be fine-tuned using application partitions.
When DNS suffixes are not a feasible option for single-label DNS name resolution, the GlobalNames Zone (GNZ) can be used.
The DNS structure is built using zone delegation, which identifies a child domain’s relationship to its parent. This relationship is defined by an NS record and an A record in the parent zone.
A disjoint namespace is one whose DNS namespace does not match its Active Directory namespace; RRs required for Active Directory to function must be created manually in this configuration.
Objective review
Answer the following questions to test your knowledge of the information in this objective. You can find the answers to these questions and explanations of why each answer choice is correct or incorrect in the “Answers” section at the end of this chapter.
1. What security feature is offered by DNSSEC?
A. Encrypted zone transfers
B. Validated name resolution
C. Secure dynamic updates
D. Prevention of cache poisoning
2. Which of the following DNSSEC RRs validate the relationship between a parent and child domain? (Choose all that apply.)
A. DS
B. SOA
C. NS
D. DNSKEY
3. What capability is provided to manage the way DNS clients use DNSSEC for validated name resolution?
A. EDNS0
B. DO flag
C. NRPT
D. CD flag
4. What process results in the size of the DNS zone increasing by four times?
A. Key rollover
B. Recursion
C. Zone delegation
D. Zone signing
5. How would you manage the range of ports used for replies to DNS queries?
A. Enable cache locking
B. Use Active Directory–integrated zones
C. Implement application directory partitions
D. Configure the DNS socket pool
6. What type of value is used to configure DNS cache locking?
A. Percentage of the RR’s TTL value
B. Number of days
C. Number of hours
D. Percentage of a day
7. What benefit is offered by using application directory partitions to manage DNS zone replication?
A. Increased control over replication
B. Secure zone transfers
C. Capability to replicate to DNS servers that are not domain controllers
D. Interoperability with BIND servers
8. Which type of RR is not used for IPv6 hosts?
A. CNAME
B. A
C. NS
D. AAAA
9. What methods can be used to provide single-label DNS name resolution? (Choose all that apply.)
A. Zone delegation
B. DNS suffixes
C. GlobalNames zone
D. Disjoint namespaces
10. What RRs are used to create a zone delegation? (Choose all that apply.)
A. NS
B. CNAME
C. A
D. DS
11. What aspect of your DNS infrastructure must be handled manually in disjoint namespaces?
A. Secure dynamic updates
B. SRV records for Active Directory domain controllers
C. DNSSEC zone signing
D. Zone transfers
Objective 2.3: Design and manage an IP address management solution
One of the new features in Windows Server 2012 that was a long time coming is IP Address Management (IPAM). The IPAM feature lets you monitor and manage DHCP servers, DNS servers, and IP address spaces throughout your enterprise in a single interface. Through the IPAM provisioning process, scheduled tasks are deployed on DHCP and DNS servers. These scheduled tasks are used to retrieve information from these servers, aggregating this data on the IPAM server.
The information within IPAM can be monitored to forecast IP address usage and diagnose problem areas at a glance. Thresholds can be configured to flag IP address pools or scopes approaching capacity. IPAM provides several ways to group and organize different collections of data, including the ability to configure custom fields.
After your IP usage data is captured and organized, IPAM features multiple capabilities for analyzing the data it contains. Comprehensive auditing, including the ability to see which IP addresses were used by specific computers and users, allows you to track IP address usage over time for security and compliance needs. Windows PowerShell cmdlets provide import and export capability for IP utilization data, giving administrators great flexibility over where their data comes from and what they can do with it.
This objective covers how to:
Manage IP addresses with IPAM
Provision IPAM
Plan for IPAM server placement
Manage IPAM database storage
Configure role-based access control with IPAM
Configure IPAM auditing
Manage and monitor multiple DHCP and DNS servers with IPAM
Migrate IP addresses
Configure data collection for IPAM
Integrate IPAM with Virtual Machine Manager (VMM)
Managing IP addresses with IPAM
DHCP and its associated IP address pools have always been an area that requires monitoring to ensure that the pool of available IP addresses does not run out. If an address pool runs out of addresses, users might not be able to access the network. The introduction of the IPAM feature in Windows Server 2012 provides administrators with an out-of-the-box tool for monitoring and managing IP address usage throughout the enterprise.
IPAM on Windows Server 2012 enables you to track usage of both public and private IP address pools. Windows Server 2012 R2 integrates with the Microsoft System Center 2012 R2 Virtual Machine Manager (VMM) to manage IP addresses used in a virtual environment.
IPAM not only allows you to monitor IP address usage but also lets you perform several aspects of DHCP infrastructure management. In Windows Server 2012, management capabilities include configuring scopes and allocating static IP addresses. With Windows Server 2012 R2, you can manage DHCP failover, DHCP policies, and DHCP filters, to name a few.
There are some limitations as to where IPAM can be installed and how many servers it can manage. IPAM cannot be installed on a domain controller. DHCP servers can have the IPAM feature installed, but automatic DHCP server discovery is disabled. IPAM has been tested to support the following numbers:
150 DHCP servers
500 DNS servers
40,000 DHCP scopes
350 DNS zones
IPAM supports monitoring of a single Active Directory forest, and only domain-joined computers can be monitored. IPAM can manage only DHCP or DNS servers that run on Windows Server 2008 and later, but data from third-party or legacy servers can be imported into the IPAM database using Windows PowerShell cmdlets such as Import-IpamRange or Import-IpamSubnet. Auditing and usage data contained in IPAM is not purged automatically, but can be purged manually as needed.
Provisioning IPAM
In addition to installing the IPAM feature, there are several steps involved in provisioning IPAM, or configuring communication between servers running the DHCP and DNS server roles and the IPAM server. This is the most critical and most complex step of deploying IPAM. There are two ways to handle IPAM provisioning: using Group Policy or manually.
Group Policy-based provisioning
Because of the number of steps involved in provisioning IPAM and the number of servers typically requiring configuration, most organizations should use Group Policy to implement IPAM, which automates much of the process.
The Provision IPAM Wizard walks you through the majority of the IPAM provisioning process, allowing you to choose a database type and location and to begin the process of creating IPAM-related Group Policy Objects (GPOs) using a prefix of your choosing. Figure 2-21 shows the Provision IPAM Wizard at the Select Provisioning Method step.
FIGURE 2-21 The Provision IPAM Wizard enables you to choose a provisioning method and a prefix for naming IPAM-related GPOs
These GPOs are not actually created and linked to a domain until the Windows PowerShell cmdlet Invoke-IpamGpoProvisioning is run. This cmdlet creates three GPOs and links them to the domain specified using the -Domain parameter. The following command is an example of creating these GPOs in the contoso.com domain, using a prefix of IPAM and an IPAM server name of IPAM1:
Invoke-IpamGpoProvisioning -Domain contoso.com -GpoPrefixName IPAM -IpamServerFqdn IPAM1.contoso.com -Force
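The GPO settings are applied to managed servers at the next Group Policy refresh; to apply them immediately on a particular server, you can run the following on that server:
gpupdate /force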
Next, you must configure the IPAM server to begin server discovery by clicking the Start Server Discovery link in the IPAM section of Server Manager. The Configure Server Discovery sheet requires you to choose a domain and the types of servers to discover. The server discovery process can take some time, depending on the size of your network and the number of servers to be discovered.
As servers are discovered, their management status must be set to Managed so IPAM can begin fully monitoring and managing them. Only users with domain administrator privileges can mark servers as managed. Individual servers can also be added manually by clicking Add Server in the Tasks menu.
Exam Tip
The IPAM provisioning process sets IPAM apart from other tools you’re familiar with in Windows Server. Ensure that you know each of the steps in Group Policy–based provisioning and have a general understanding of what each step does in the provisioning process.
Manual provisioning
Although Group Policy–based provisioning is the best option for most scenarios, understanding the manual provisioning process ensures that you know the requirements for provisioning IPAM and configuring each server type.
Provisioning IPAM manually requires you to make the necessary configuration changes to each server that you want to manage with IPAM. Each server can be configured through the manual application of the IPAM GPOs, or individual settings can be applied to each server. These changes include modifying firewall rules, adding security groups and modifying memberships, configuring file shares, and restarting services. Each managed server type (DNS, DHCP, and domain controller) has a separate set of requirements and a different process for manual provisioning. After you manually provision the required servers, the server discovery process can begin, and you can configure servers as managed in IPAM.
Creating a security group in Active Directory to contain IPAM servers is recommended prior to manually provisioning IPAM. Group Policy–based provisioning uses the IPAMUG group in Active Directory to give IPAM servers membership in local security groups.
DHCP Servers
Several requirements must be met before IPAM can retrieve data from DHCP servers. Incoming traffic from IPAM to the DHCP server must be allowed through the Windows Firewall using the following built-in rules:
DHCP Server (RPC-In)
DHCP Server (RPCSS-In)
File and Printer Sharing (NB-Session-In)
File and Printer Sharing (SMB-In)
Remote Event Log Management (RPC)
Remote Event Log Management (RPC-EPMAP)
Remote Service Management (RPC)
Remote Service Management (RPC-EPMAP)
Because IPAM communicates directly with DHCP servers, the IPAM server must be given access to DHCP resources through membership in the DHCP Users local security group. In addition, membership in the Event Log Readers group is used to audit DHCP-related events. After the IPAM server is given membership in the appropriate groups, the DHCP Server service must be restarted.
For monitoring IP address utilization, IPAM requires access to the DHCP audit file location, which is contained in the C:\Windows\System32\DHCP folder by default. This folder should be shared as dhcpaudit, with Read permissions given to the IPAM server or the Active Directory group containing the IPAM servers. Best practices dictate that the default permissions for the Everyone group should be removed from this share.
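The manual DHCP provisioning steps described above can also be scripted. The following is a minimal sketch run on the DHCP server; the CONTOSO domain and the IPAMUG group (the group name used by Group Policy–based provisioning) are illustrative, and only a few of the firewall rules from the list are shown:
# Enable built-in firewall rules IPAM needs to reach the DHCP server
Enable-NetFirewallRule -DisplayName "DHCP Server (RPC-In)", "DHCP Server (RPCSS-In)"
Enable-NetFirewallRule -DisplayName "Remote Event Log Management (RPC)", "Remote Event Log Management (RPC-EPMAP)"
# Grant the IPAM servers access to DHCP data and event logs, then restart the DHCP Server service
net localgroup "DHCP Users" CONTOSO\IPAMUG /add
net localgroup "Event Log Readers" CONTOSO\IPAMUG /add
Restart-Service DHCPServer
# Share the DHCP audit log folder with read access for the IPAM servers
New-SmbShare -Name "dhcpaudit" -Path "C:\Windows\System32\DHCP" -ReadAccess "CONTOSO\IPAMUG"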
DNS Servers
For monitoring and managing DNS servers, inbound IPAM traffic must be allowed through the Windows Firewall. As with DHCP servers, the following rules are predefined and need only be enabled:
RPC (TCP, Incoming)
DNS (UDP, Incoming)
DNS (TCP, Incoming)
RPC Endpoint Mapper (TCP, Incoming)
Remote Service Management (RPC-EPMAP)
Remote Service Management (NP-In)
Remote Service Management (RPC)
Remote Event Log Management (RPC-EPMAP)
Remote Event Log Management (RPC)
If the DNS server is also a domain controller, membership in the Event Log Readers group in Active Directory must be provided to the IPAM server to monitor DNS events. If the DNS server is not a domain controller, the local Event Log Readers group should be used.
Monitoring of the DNS server’s event log must also be enabled for the IPAM server, which is accomplished by modifying the CustomSD value in the HKLM\SYSTEM\CurrentControlSet\Services\EventLog\DNS Server registry key. To enable event logging, you must first find the security identifier (SID) value that belongs to the IPAM server’s Active Directory computer object. A computer object’s SID can be retrieved using the Get-ADComputer PowerShell cmdlet. The CustomSD value must be modified, with the following string being appended to the end of the existing value: (A;;0x1;;;<IPAM SID>).
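A minimal sketch of this change, run on the DNS server and assuming the IPAM server’s computer account is named IPAM1 (the name is purely illustrative):
# Look up the SID of the IPAM server's computer object
Import-Module ActiveDirectory
$sid = (Get-ADComputer -Identity "IPAM1").SID.Value
# Append the access control entry for the IPAM server to the existing CustomSD value
$key = "HKLM:\SYSTEM\CurrentControlSet\Services\EventLog\DNS Server"
$customSd = (Get-ItemProperty -Path $key -Name CustomSD).CustomSD
Set-ItemProperty -Path $key -Name CustomSD -Value ($customSd + "(A;;0x1;;;$sid)")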
To enable management of DNS servers, IPAM must be given DNS administration permissions. On DNS servers also operating as a domain controller, the IPAM server or a group in which it has membership must be added to the Security tab of the DNS server properties window. For management of other DNS servers, the IPAM server should be added to the local Administrators group.
Domain Controllers
To enable monitoring of Active Directory domain controllers or NPS servers, the following two Windows Firewall rules must be enabled:
Remote Event Log Management (RPC-EPMAP)
Remote Event Log Management (RPC)
Membership in the Event Log Readers security group in Active Directory provides IPAM the capability to monitor domain controllers. NPS servers can give access to event logs through membership in the local Event Log Readers group.
Planning for IPAM server placement
A big aspect of planning an IPAM deployment is determining the placement of each IPAM server, which could come down to incremental management levels (discussed in the next section) or reasons that are purely geographic. Regardless of the reasoning, there are two primary strategies for IPAM server placement: distributed and centralized.
A distributed IPAM deployment typically places one IPAM server in each site in the enterprise, as illustrated in Figure 2-22. This configuration allows for delegation of management tasks and control over address spaces. Each IPAM server is configured to discover and manage servers within its area of control.
FIGURE 2-22 A distributed IPAM deployment with an IPAM server deployed in each location
Centralized IPAM deployments feature a single IPAM server, typically at corporate headquarters. This scenario works better for organizations that have a centralized IT presence or little to no server footprint at branch locations. Figure 2-23 shows a centralized IPAM deployment.
FIGURE 2-23 IPAM deployed in a centralized placement
Large organizations can opt for a hybrid of these two situations, as shown in Figure 2-24. IPAM provides no means of synchronizing databases, but servers can be configured as managed in multiple IPAM servers. Another option is to leverage the IPAM Import and Export cmdlets to move data between servers, providing visibility at multiple levels of the organization.
FIGURE 2-24 A hybrid IPAM deployment, with an IPAM server in each location and a second IPAM server deployed at corporate headquarters to monitor and manage IP address utilization throughout the enterprise
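A hedged sketch of that export/import pattern, assuming the branch IPAM server writes a CSV file that the headquarters IPAM server then reads (the share and file names are illustrative):
# On a branch IPAM server: export the branch address data to a central share
Export-IpamAddress -Path "\\HQ-IPAM\Imports\branch01-addresses.csv"
# On the headquarters IPAM server: import the branch data for enterprise-wide visibility
Import-IpamAddress -Path "\\HQ-IPAM\Imports\branch01-addresses.csv"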
Managing IPAM database storage
Windows Server 2012 R2 also introduces the capability to store the IPAM database in an external Microsoft SQL Server database instead of the default Windows Internal Database (WID). This storage option not only increases performance and reliability but also provides additional methods of accessing, analyzing, and reporting on the data collected. The choice between database storage types is presented during the provisioning process, or the Move-IpamDatabase cmdlet can be used as shown here:
Move-IpamDatabase -DatabaseServer "ContosoDB" -DatabaseName "IpamDB" -DatabaseAuthType Windows
Using role-based access control with IPAM
Deploying IPAM servers in each corporate location is one way to delegate control to local administrator groups, but it is not the only method to bring access control to your organization. Role-based access is the preferable method of defining access control over IPAM servers because it provides much more control over the actions that an administrator can perform and the objects that can be controlled.
Although Windows Server 2012 included default groups that gave you some control over an administrator’s access to IPAM, Windows Server 2012 R2 takes this to an entirely new level by making role-based access completely configurable and providing a much finer level of control. Role-based access control for IPAM in Windows Server 2012 R2 manages access control by breaking permissions down into three separate aspects: roles, access scopes, and access policies.
Roles
An IPAM role is a selection of operations that an administrator can perform. By selecting the management tasks appropriate to a user group or administrative role, you can provide users with access to perform their tasks without giving them the ability to affect other areas within IPAM. Several built-in roles are included with IPAM in Windows Server 2012 R2, as shown in Table 2-8.
TABLE 2-8 Default IPAM roles in Windows Server 2012 R2
Custom roles can also be created and should be defined by your organizational structure or business rules. Custom role creation is shown in Figure 2-25.
FIGURE 2-25 Creating a new IPAM role, providing DHCP administration to branch administrators
Access scopes
Access scopes can be used to define the objects an administrative user can monitor or manage. By default, only the Global access scope is provided in Windows Server 2012 R2, providing access to all objects in IPAM. Additional access scopes can be added, as shown in Figure 2-26, and the default access scope can be changed from the Global access scope. Because access scopes are defined in a hierarchy, users with access to the Global access scope can always monitor and manage all objects within IPAM.
FIGURE 2-26 Managing IPAM access scopes
Rather than defining a set of criteria or rules for access scopes, individual objects such as IP address blocks, DHCP scopes, and DNS zones must be placed within an access scope. At this time, none of the PowerShell cmdlets in the IpamServer module supports automated configuration of the access scope.
Access policies
Access policies combine the role and access scope with a user or group, allowing you to assign access to perform specific actions on certain objects to a group of users. In the Add Access Policy window, which is shown in Figure 2-27 and is accessed through the Tasks menu, a user or security group is associated with one or more role and access scope pairings. This allows a single group to be given the ability to perform one role within one access scope and another role in a different access scope.
FIGURE 2-27 Creating a new access policy: combining an Active Directory security group with IPAM roles and access scopes
Exam Tip
Role-based access control in IPAM is a change in Windows Server 2012 R2. Make sure that you understand the differences between roles, access scopes, and access policies, as well as how to configure each.
Configuring IPAM auditing
IPAM simplifies the task of auditing IP-related events. IP address assignment can be tracked using event data retrieved from DHCP servers, DNS servers, and domain controllers. Due to the types of events catalogued on these servers, IPAM enables you to view address assignments based on IP address, host name, or even user name by selecting the appropriate view in the bottom-left corner of the IPAM console (see Figure 2-28).
FIGURE 2-28 Auditing IP address assignments by host name (even records that predate the host name changes are shown)
DHCP and IPAM configuration events can also be audited, easing the troubleshooting process and allowing you to monitor configuration changes made by administrators with delegated permissions.
Audit records remain in the IPAM database indefinitely and must be manually purged using the Purge Event Catalog Data option from the Tasks menu. Purging these audit records has no impact on the event logs of the managed servers.
Managing and monitoring multiple DHCP and DNS servers with IPAM
IPAM provides several methods to view data grouped or aggregated from multiple servers, and in some cases to even edit or configure multiple objects simultaneously.
IPAM enables you to create server groups to monitor and manage multiple servers that share configuration similarities. This functionality is quite different from the similarly named server group feature in Server Manager. Rather than manually adding servers to a group, IPAM allows you to choose one or more criteria on which to create the group, such as site, building, or floor. New servers added to the network that meet the criteria of the group are automatically added.
Exam Tip
The big draw for IPAM is being able to reduce the management workload for administrators. Knowing how to organize servers, address pools, and scopes is a critical aspect of managing IPAM.
Migrating IP addresses
Because IPAM is a new feature in Windows Server 2012, efforts have been made to ease the workload of spinning up the IPAM service. Much of the process is automated, as we’ve already discussed, but some scenarios don’t lend themselves to automatic management and configuration in IPAM. Third-party DHCP servers or static IP addresses might require more hands-on effort than address ranges managed by Microsoft DHCP servers, but there are still tools available to migrate these addresses into IPAM.
The Tasks menu in the IPAM console provides quite a few options for importing different collections of IP information. The Import And Update IP Address Ranges option enables you to choose what service manages the address space, choose a server to handle the management of these addresses, and provide a comma-separated values (CSV) file containing the IP address information for the range you want to manage using IPAM. Additionally, the Import-IpamAddress, Import-IpamRange, and Import-IpamSubnet Windows PowerShell cmdlets can facilitate the migration from other IP address management tools. The Windows Server 2012 IPAM/DHCP integration module, a PowerShell script provided by Microsoft, allows you to quickly read DHCP leases into IPAM using the Invoke-IpamDhcpLease cmdlet. The Windows Server 2012 IPAM/DHCP integration module can be downloaded from http://gallery.technet.microsoft.com/scriptcenter/Windows-Server-2012-f44cefce. A comprehensive list of PowerShell cmdlets useful for managing IPAM data imports and exports is provided in Table 2-9.
TABLE 2-9 PowerShell cmdlets used for importing or exporting IPAM data
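For example, address ranges exported from a third-party IP management tool to a properly formatted CSV file might be imported with a command along these lines; the file path is illustrative, and the exact parameter set should be confirmed against the cmdlet help:
Import-IpamRange -Path "C:\Migration\ip-ranges.csv"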
Configuring data collection for IPAM
As part of the IPAM provisioning process, several scheduled tasks are created in order to facilitate data collection. Table 2-10 shows the default frequencies of these tasks. If you have provisioned IPAM using Group Policy, these frequencies can be adjusted within the GPOs.
TABLE 2-10 IPAM data collection tasks
These tasks (with the exception of ServerDiscovery) can be run manually from the IPAM console by selecting one or more servers in the Server Inventory section and choosing Retrieve All Server Data from the context menu. Figure 2-29 shows these tasks as they are listed in Task Scheduler under the Microsoft\Windows\IPAM node.
FIGURE 2-29 IPAM data collection tasks
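Because the tasks live in Task Scheduler, they can also be inspected and triggered from PowerShell on the IPAM server; the task path matches the node shown in Figure 2-29, and the task name comes from Table 2-10:
# List the IPAM data collection tasks and their schedules
Get-ScheduledTask -TaskPath "\Microsoft\Windows\IPAM\"
# Run a single collection task on demand
Start-ScheduledTask -TaskPath "\Microsoft\Windows\IPAM\" -TaskName "AddressUtilization"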
Integrating IPAM with Virtual Machine Manager (VMM)
For organizations making extensive use of virtual machines (VMs) using Hyper-V, the addition in Windows Server 2012 R2 of integration between IPAM and Virtual Machine Manager (VMM) is a huge benefit. IPAM can monitor and manage network services and IP address usage not only in your physical network but also in logical networks created within VMM. IPAM can be integrated with one or more VMM servers using the IPAM integration plugin for VMM, which manages communication between the VMM and IPAM servers. IPAM servers are added in VMM through the Add Network Service Wizard, found under Networking in the Fabric pane.
Not only can virtual address space usage be tracked but IPAM and VMM also synchronize address space data, usage, and even alerts. Additionally, address spaces can be created within IPAM, and the changes are visible in the VMM console. Conversely, virtual networks and IP address pools created in VMM are automatically displayed in IPAM.
Thought experiment: Planning an enterprise-wide IPAM deployment
In this thought experiment, apply what you’ve learned about this objective. You can find answers to these questions in the “Answers” section at the end of this chapter.
You have been hired at a corporation with a nationwide presence. Your new company has a corporate headquarters, four regional headquarters, and branches distributed throughout each region. Datacenters reside in both the corporate and regional headquarters. Each branch has a single server operating as a domain controller, DNS server, and DHCP server.
One of the first tasks in your new job is to plan for the implementation of IPAM throughout the enterprise. Your primary goal is to provide increased control and visibility of the servers and IP address space within each region.
How would you plan your IPAM implementation based on this scenario?
1. Given the requirements and the organizational structure, what type of deployment would you use and where would you place IPAM servers?
2. How would you manage the provisioning process for each regional IT department to have visibility and control over DHCP servers within its region?
3. What tool would you use to allow regional administrators to have access to DHCP servers, scopes, and IP addresses without giving them control over DNS or IPAM?
4. How could regional administrators use IPAM to view similarly configured DHCP servers together?
5. The CIO wants to receive monthly reports illustrating IP address utilization from each region. What tool does IPAM provide that would facilitate these reports?
6. Why might there be some trouble auditing IP address usage for users travelling between regions if a distributed IPAM deployment is used?
Objective summary
IPAM is a feature that monitors and manages IP address utilization, DHCP servers, DNS servers, and domain controllers.
Windows Server 2012 R2 adds integration with the Microsoft System Center 2012 R2 VMM to the IPAM toolset.
An additional feature of IPAM in Windows Server 2012 R2 is the capability to use an external database running Microsoft SQL Server.
IPAM must be provisioned to begin server discovery and data collection. This process can be handled through Group Policy or manual provisioning.
The IPAM provisioning process involves creating firewall rules, managing file shares, configuring groups and permissions, and implementing scheduled tasks.
IPAM servers can be deployed using a centralized or distributed strategy, or a hybrid of the two.
Role-based access control is introduced in Windows Server 2012 R2, allowing you to create a permissions structure based on roles, access scopes, and access policies.
IPAM allows for auditing of IP address usage as well as DHCP and IPAM configuration.
IPAM handles data collection through scheduled tasks running on the IPAM server and provisioned DHCP servers, DNS servers, and domain controllers.
Objective review
Answer the following questions to test your knowledge of the information in this objective. You can find the answers to these questions and explanations of why each answer choice is correct or incorrect in the “Answers” section at the end of this chapter.
1. What is the first step of implementing IPAM on your network?
A. Starting server discovery
B. Performing a gpupdate /force on each DHCP server
C. Configuring each server as managed
D. Provisioning IPAM
2. What tool on your network facilitates the IPAM provisioning process?
A. Group Policy
B. DHCP configuration options
C. Windows PowerShell
D. Microsoft System Center 2012
3. When using the manual provisioning method, which of the following tasks are typically required for provisioning servers? (Choose all that apply.)
A. Enabling rules in Windows Firewall
B. Creating file shares
C. Restarting the server
D. Assigning group membership
4. On which type of server role can IPAM not be installed?
A. DHCP
B. DNS
C. Active Directory domain controller
D. NPS
5. Which IPAM deployment method is best for organizations with IT support at each location, but no centralized IT department at a corporate headquarters?
A. Centralized
B. Distributed
C. Hybrid
D. Manual
6. Which of the following cannot be managed by IPAM?
A. Windows Server 2012 domain controller
B. Windows Server 2008 DNS server
C. Windows Server 2003 DHCP server
D. Windows Server 2012 R2 DHCP failover
7. Which PowerShell cmdlet allows you to populate IPAM with IP addresses from a properly formatted CSV file?
A. Import-IpamAddress
B. Import-IpamRange
C. Import-IpamSubnet
D. Invoke-IpamDhcpLease
8. Which aspect of role-based access control defines the operations an administrator can perform?
A. Roles
B. Access scopes
C. Access policies
D. Security groups
9. Which IPAM role allows an administrator to monitor and manage DHCP failover relationships within IPAM, while limiting the ability to perform other management tasks?
A. IPAM administrator
B. IPAM DHCP administrator
C. IPAM DHCP scope administrator
D. IP address record administrator
10. What types of information can an administrator use to audit IP address usage? (Choose all that apply.)
A. IP address
B. Host name
C. User name
D. Connection type
11. What scheduled task is not run when the Retrieve All Server Data option is initiated for a server?
A. AddressUtilization
B. Audit
C. ServerDiscovery
D. ServiceMonitoring
Answers
This section contains the solutions to the thought experiments and answers to the lesson review questions in this chapter.
Objective 2.1: Thought experiment
1. DHCP failover in hot standby mode is the ideal solution. In this situation, the local DHCP servers would be configured as the primary server in hot standby mode, and the server residing in the datacenter would be configured as the hot standby.
2. A DHCP Management Pack is best suited to monitor and manage DHCP servers on a large scale. The customer has to implement Microsoft System Center 2012 to use this functionality.
3. DHCPv6 in a stateless configuration would allow the IT department to provide network configuration to DHCP clients without providing an IPv6 address.
4. In Windows Server 2012 R2, DHCP policies can be used to manage DNS registration at this level.
Objective 2.1: Review
1. Correct answer: C
A. Incorrect: A split-scope DHCP configuration results in two DHCP servers sharing a workload, but it is not part of DHCP failover, and each server is responsible for only a portion of the address pool.
B. Incorrect: Hot standby mode is used when one server is to be inactive until the primary server is unavailable.
C. Correct: Load balancing mode allows both partner DHCP servers to provide the full pool of addresses. If one server becomes unavailable, the second server continues to serve the full address pool.
D. Incorrect: A DHCP failover cluster provides the full pool of addresses and can be split between two or more active servers, but is not part of DHCP failover.
2. Correct answer: D
A. Incorrect: Two nodes are the minimum number of nodes for DHCP failover clusters and are the maximum for DHCP failover.
B. Incorrect: DHCP failover clusters support more than six nodes.
C. Incorrect: Failover clusters can be larger than 10 nodes.
D. Correct: The maximum number of nodes supported for DHCP failover clustering is 64.
3. Correct answer: B
A. Incorrect: Although a split-scope configuration would work in this situation, it is not ideal because split scope doesn’t continue to provide the full pool of addresses.
B. Correct: DHCP failover in hot standby mode is the optimal solution because the on-site server responds to all DHCP requests. If the primary server goes down, the off-site hot standby can provide the full pool of addresses.
C. Incorrect: A DHCP failover cluster is not the best fit because a failover cluster requires shared storage, which means that the two servers are typically co-located.
D. Incorrect: DHCP failure in load balancing mode would work, but there is potential for performance problems due to the remote server responding to requests when the local server is still operational.
4. Correct answer: D
A. Incorrect: The scope’s lease duration is not used until the standby server enters the partner down state.
B. Incorrect: Lease durations are configured at the DHCP scope level.
C. Incorrect: Even if the primary DHCP server returns to service, the client’s lease duration is determined by using the Maximum Client Lead Time configured in the failover partnership.
D. Correct: The Maximum Client Lead Time determines both the lease time for DHCP leases provided by the standby server and the amount of time between the primary server becoming unresponsive and the hot standby server going into partner down mode.
5. Correct answer: D
A. Incorrect: DHCP can and should be used to provide DNS information to clients. In some cases the DHCP server should be used to perform dynamic updates to the DNS server.
B. Incorrect: DHCP is not directly interoperable with Active Directory.
C. Incorrect: Virtual private network (VPN) services or DirectAccess can make use of DHCP; the decision to do so depends largely on the VPN infrastructure and scope.
D. Correct: DHCP interoperability with NAP is not recommended because it is the weakest of the NAP enforcement methods. Also, NAP is deprecated in Windows Server 2012 R2.
6. Correct answer: B
A. Incorrect: DHCP filtering enables you to choose which clients to provide addresses to, but doesn’t allow you to provide configuration information without IP addresses.
B. Correct: Stateless DHCPv6 is used to provide configuration information without IP addresses. The IPv6 addresses are acquired through autoconfiguration.
C. Incorrect: DHCP options provide configuration settings through DHCP, but they must be used in conjunction with stateless DHCPv6 to meet the requirement.
D. Incorrect: Stateful DHCPv6 provides both the configuration information and an IPv6 address.
7. Correct answer: B
A. Incorrect: 30 minutes is incorrect.
B. Correct: The DHCP automatic database backup runs every 60 minutes by default. Both the frequency and the backup location can be modified.
C. Incorrect: 12 hours is incorrect.
D. Incorrect: 24 hours is incorrect.
Objective 2.2: Thought experiment
1. File-backed DNS zones miss out on several security benefits enjoyed by Active Directory–integrated zones, most notably secure dynamic updates and zone transfers. The fact that dynamic updates are being allowed in a file-backed zone is a huge concern, but the issue should be easy to resolve because each DNS server is also a domain controller.
2. Ideally, you will have a limited number of DNS servers configured to perform external name resolution. This situation reduces the visibility of your DNS servers to the outside world and also increases the value of caching on these servers, which improves performance.
3. Using the same domain for internal and external name resolution opens the possibility for outside sources to determine the names and IP addresses of internal computers. Implementing a separate DNS infrastructure specifically for resolving external name resolution requests enables you to selectively choose which computers can be resolved by external DNS clients.
4. DNSSEC simply provides a validation process for DNS name resolution. There are inherent performance impacts due to the zones quadrupling in size, resulting in increased memory and storage needs.
Objective 2.2: Review
1. Correct answer: B
A. Incorrect: DNSSEC does not offer encryption, nor does it secure zone transfers.
B. Correct: DNSSEC validates name resolution through a chain of trust and digitally signed RRs.
C. Incorrect: DNSSEC has no impact on dynamic updates. To secure dynamic updates, you must use an Active Directory–integrated zone.
D. Incorrect: DNSSEC does not protect cached records; you should consider cache locking instead.
2. Correct answers: A, D
A. Correct: To validate a child, DS records reside in a parent domain.
B. Incorrect: SOA records are used in all DNS zones and are not specific to DNSSEC.
C. Incorrect: NS records are used by parent zones to indicate a delegated zone and are not specific to DNSSEC.
D. Correct: DNSKEY records correspond to the DS record hosted in the parent zone.
3. Correct answer: C
A. Incorrect: EDNS0 is the standard that increases the length of the DNS packet. This is required for DNSSEC traffic, but does not manage how clients use DNSSEC for validation.
B. Incorrect: The DO flag indicates to the DNS server that the client is DNSSEC-capable, but the DO flag is governed by the NRPT.
C. Correct: The NRPT is used to configure the way DNSSEC is used for validation, what domains it should be used to validate, and whether a valid response is required.
D. Incorrect: The CD flag allows DNS clients, or more typically the recursive DNS server, to allow for responses that have not yet been validated by DNSSEC.
4. Correct answer: D
A. Incorrect: Key rollover does not result in a drastic expansion in the size of the DNS zone.
B. Incorrect: Recursion has no impact on the size of a DNS zone.
C. Incorrect: Zone delegation does not significantly increase the size of a DNS zone.
D. Correct: Zone signing creates three additional DNSSEC RRs to provide validated responses and negative responses to DNS queries.
5. Correct answer: D
A. Incorrect: Cache locking is used to prevent cached RRs from being updated.
B. Incorrect: Active Directory–integrated zones allow for secure dynamic updates and replication, but do not alter how the DNS server responds to name resolution queries.
C. Incorrect: Application directory partitions are used to manage DNS zone replication, but do not change how DNS queries are handled.
D. Correct: The DNS socket pool manages which ports can be used for responding to DNS queries.
6. Correct answer: A
A. Correct: Cache locking configures how soon into a cached RR’s TTL updates to the record are allowed. This value is set to 100 by default, which means that no updates are allowed until the full TTL has expired.
B. Incorrect: Cache locking is not configured by a number of days.
C. Incorrect: You cannot configure cache locking as a number of hours.
D. Incorrect: Cache locking is not configured as a percentage of a day.
7. Correct answer: A
A. Correct: Application directory partitions offer increased control over DNS replication by allowing you to select the domain controllers to be part of the replication scope.
B. Incorrect: The use of application directory partitions indicates that the zone is Active Directory–integrated, so replication is used rather than zone transfers. Regardless, this is not a function of application directory partitions.
C. Incorrect: Application directory partitions require Active Directory–integrated DNS zones, which can be hosted only on DNS servers that are also domain controllers.
D. Incorrect: BIND compatibility is not dependent on application directory partitions.
8. Correct answer: B
A. Incorrect: CNAME records use only name values rather than IP addresses, so they function the same way for IPv4 or IPv6 hosts.
B. Correct: A records accept only IPv4 addresses, so they are replaced by AAAA records with IPv6 hosts.
C. Incorrect: NS records refer to the name server’s host name, which is contained within a corresponding A or AAAA record.
D. Incorrect: AAAA records replace A records for IPv6 hosts.
9. Correct answers: B, C
A. Incorrect: Zone delegations are used to create the hierarchical structure of DNS, whereas single-label DNS names are devoid of any structure.
B. Correct: DNS suffixes can sometimes be used to resolve single-label DNS names.
C. Correct: The GlobalNames zone is used strictly for resolving single-label DNS names.
D. Incorrect: Disjoint namespaces occur when an Active Directory domain differs from the DNS domain.
10. Correct answers: A, C
A. Correct: NS records contain the host name of the name server used to provide name resolution for the child zone.
B. Incorrect: CNAME records are aliases for A records and are not typically used for zone delegations.
C. Correct: An A record is required to resolve the name contained in the NS record to an IP address.
D. Incorrect: DS records are used only in DNSSEC signed zones. Although they do refer to a child domain, they are not part of the zone delegation.
11. Correct answer: B
A. Incorrect: Secure dynamic updates can be handled in a disjoint namespace, although they require some additional configuration.
B. Correct: The SRV records used to refer clients to Active Directory domain controllers must be created manually in a disjoint namespace.
C. Incorrect: DNSSEC signed zones can still be used in disjoint namespaces.
D. Incorrect: Zone transfers are not complicated by the use of disjoint namespaces.
Objective 2.3: Thought experiment
1. Given the requirements, a distributed IPAM deployment is most likely.
2. Perform the IPAM provisioning process in each region, referencing the regional IPAM server. If each region is its own domain, this process can be done at the domain level. If the Active Directory domain structure does not match regional divisions, there might need to be a more advanced Group Policy configuration or manual provisioning.
3. Role-based access is the best method to provide this type of control over security.
4. Server groups enable you to combine servers with configuration similarities.
5. The IPAM Windows PowerShell cmdlets such as Export-IpamAddress, Export-IpamRange, and Export-IpamSubnet can be used to create CSV files for analysis and reporting.
6. A distributed IPAM deployment means that auditing across regions has to be handled across multiple IPAM servers.
Objective 2.3: Review
1. Correct answer: D
A. Incorrect: The server discovery process does not take place until IPAM provisioning has occurred.
B. Incorrect: Updating Group Policy helps accelerate the provisioning process, but provisioning must occur first.
C. Incorrect: In IPAM, configuring servers as managed occurs after provisioning and server discovery.
D. Correct: Provisioning IPAM begins the configuration process of the servers that IPAM manages.
2. Correct answer: A
A. Correct: IPAM features automated provisioning through the use of Group Policy.
B. Incorrect: DHCP configuration options cannot be used to provision IPAM.
C. Incorrect: There are aspects of the provisioning process that are accomplished using Windows PowerShell, and it can be a good tool for a manual provisioning process, but Group Policy provides a better solution in most cases.
D. Incorrect: Microsoft System Center 2012 is not part of the IPAM provisioning process.
3. Correct answers: A, B, D
A. Correct: Each of the server types requires configuration of firewall rules to facilitate communication with the IPAM server.
B. Correct: File shares are sometimes required during IPAM provisioning, as in the case of DHCP auditing.
C. Incorrect: IPAM provisioning does not typically require servers to be restarted.
D. Correct: Group membership is often required to give IPAM the permissions required to monitor and manage different server types.
4. Correct answer: C
A. Incorrect: IPAM can be installed on a DHCP server, but automatic DHCP discovery will be disabled.
B. Incorrect: DNS servers can also function as IPAM servers.
C. Correct: Domain controllers cannot run the IPAM feature.
D. Incorrect: IPAM can be installed on an NPS server.
5. Correct answer: B
A. Incorrect: Centralized deployments of IPAM are best suited for organizations in which IT support is primarily centralized, and network services such as DHCP and DNS are managed from corporate headquarters.
B. Correct: A distributed IPAM deployment allows IT support at each location to monitor and manage its own DHCP servers, DNS servers, and domain controllers.
C. Incorrect: A hybrid deployment isn’t necessary in this situation because no centralized IT department exists.
D. Incorrect: Manual provisioning could certainly be used in this scenario, but it doesn’t specifically address the need for distributed deployment.
6. Correct answer: C
A. Incorrect: Windows Server 2012 domain controllers are monitored to provide information for auditing purposes.
B. Incorrect: DNS servers can be monitored and managed from IPAM, including those using Windows Server 2008.
C. Correct: Windows Server 2003 servers cannot be managed or monitored with IPAM.
D. Incorrect: DHCP failover relationships in Windows Server 2012 R2 can be monitored and managed using IPAM.
7. Correct answer: A
A. Correct: The Import-IpamAddress cmdlet is used to import IP address information into IPAM from a file.
B. Incorrect: Import-IpamRange allows you to acquire data pertaining to the full IP address range, not individual addresses.
C. Incorrect: The Import-IpamSubnet PowerShell cmdlet populates IPAM with data from your organization’s subnets. Information about specific IP addresses is not included.
D. Incorrect: Invoke-IpamDhcpLease is part of the Windows Server 2012 IPAM/DHCP integration module, and is used to import IP address information directly from a DHCP server, not a CSV file.
8. Correct answer: A
A. Correct: Roles determine what actions can be performed by an administrator.
B. Incorrect: Access scopes control the areas an administrator can manage.
C. Incorrect: Access policies combine a role, access scope, and security group or user to apply permissions to administrators.
D. Incorrect: Security groups can contain users, but cannot limit a user’s access without being tied to a role and an access scope using an access policy.
9. Correct answer: B
A. Incorrect: The IPAM administrator role would give administrators the ability to manage DHCP failover relationships, but would provide several other abilities as well.
B. Correct: Administrators given the IPAM DHCP administrator role can manage all aspects of DHCP within IPAM, including failover relationships.
C. Incorrect: IPAM DHCP scope administrators do not have the necessary permissions to manage failover relationships.
D. Incorrect: The IP address record administrator role does not provide the ability to administer DHCP servers or failover relationships.
10. Correct answers: A, B, C
A. Correct: IPAM audits can be conducted to determine what computers and users have obtained a particular IP address.
B. Correct: Using a host name, IPAM can be audited to determine which IP addresses have been obtained by a specific computer.
C. Correct: Collecting data from DHCP, DNS, and domain controllers allows IPAM to provide audits by user name.
D. Incorrect: IPAM does not provide a means to audit by network connection type.
11. Correct answer: C
A. Incorrect: The AddressUtilization task is one of six tasks initiated when data collection is triggered manually.
B. Incorrect: The Audit task runs when the Retrieve All Server Data option is used in Server Inventory.
C. Correct: ServerDiscovery is not executed during a manual data collection.
D. Incorrect: ServiceMonitoring gets triggered during the manual data-collection process.