Network Optimization - CompTIA Network+ N10-006 Cert Guide (2015)

Chapter 9. Network Optimization

Upon completion of this chapter, you will be able to answer the following questions:

Image Why is high availability a requirement in today’s network designs, and what mechanisms can help provide that high availability?

Image What various technologies optimize network performance?

Image What QoS mechanisms can help optimize network performance?

Image Using what you have learned in this and previous chapters, how do you design a SOHO network based on a set of requirements?

If you saw the movie Field of Dreams, you’ve heard this statement: “If you build it, they will come.” That statement has proven itself to be true in today’s networks. These networks, which were once relegated to the domain of data, can now carry voice and video. These additional media types, along with mission-critical data applications, need the network to be up and available for their users.

For example, think about how often your telephone service has been unavailable versus how often your data network has been unavailable. Unfortunately, data networks have traditionally been less reliable than voice networks; however, today’s data networks often are voice networks, contributing to this increased demand for uptime. Unified voice services such as call control and communication gateways can be integrated into one or more network devices, leveraging the bandwidth available on the LAN and WAN.

Beyond basic availability, today’s networks need optimization tools to make the most of their available bandwidth. This book already addressed several network optimization tools, which are reviewed in this chapter.

Quality of service (QoS) is an entire category of network-optimization tools. QoS, as one example, can give priority treatment to latency-sensitive (delay-sensitive) traffic, such as Voice over IP (VoIP). This chapter devotes a section to exploring these tools.

Finally, based on what you learn in this chapter and what you have learned in previous chapters, you are presented with a design challenge. Specifically, a case study presents various design requirements for a small office/home office (SOHO) network. After you create your own network design, you can compare your solution with a suggested solution, keeping in mind that multiple solutions are valid.

Foundation Topics

High Availability

If a network switch or router stops operating correctly (meaning that a network fault occurs), communication through the network could be disrupted, resulting in a network becoming unavailable to its users. Therefore, network availability, called uptime, is a major design consideration. This consideration might, for example, lead you to add fault-tolerant devices and fault-tolerant links between those devices. This section discusses the measurement of high availability along with a collection of high-availability design considerations.

High-Availability Measurement

The availability of a network is measured by its uptime during a year. For example, if a network has five nines of availability, it is up 99.999 percent of the time, which translates to a maximum of about 5 minutes of downtime per year. If a network has six nines of availability (it is up 99.9999 percent of the time), it is down only about 30 seconds per year.
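The downtime figures above follow directly from the availability percentage. A quick Python sketch (the function name is ours, invented for illustration) confirms the math:

```python
# Convert an availability percentage ("nines") into maximum yearly downtime.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability_percent):
    """Maximum downtime per year, in minutes, for a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

print(round(downtime_minutes(99.999), 2))        # five nines -> 5.26 minutes
print(round(downtime_minutes(99.9999) * 60, 1))  # six nines -> 31.5 seconds
```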

As a designer, one of your goals is to select components, topologies, and features that maximize network availability within certain parameters (for example, a budget). Be careful not to confuse availability with reliability. A reliable network, as an example, does not drop many packets, whereas an available network is up and operational.


Note

The availability of a network increases as the mean time to repair (MTTR) of the network devices decreases and as the mean time between failures (MTBF) increases. Therefore, selecting reliable networking devices that are quick to repair is crucial to a high-availability design.
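The note's relationship between MTBF, MTTR, and availability is commonly expressed as availability = MTBF / (MTBF + MTTR). A small sketch with hypothetical values:

```python
# Availability as a function of MTBF and MTTR (both in hours).
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical device: fails once every 10,000 hours, takes 4 hours to repair.
print(round(availability(10_000, 4) * 100, 4))  # -> 99.96 percent
```

Notice that either raising MTBF or lowering MTTR pushes the result toward 100 percent, which is exactly the design guidance in the note.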


Fault-Tolerant Network Design

Two approaches to designing a fault-tolerant network are as follows:


Image Single points of failure: If the failure of a single network device or link (for example, a switch, router, or WAN connection) would result in a network becoming unavailable, that single device or link is a potential single point of failure. To eliminate single points of failure from your design, you might include redundant links and redundant hardware. For example, some high-end Ethernet switches support two power supplies, and if one power supply fails, the switch continues to operate by using the backup power supply. Link redundancy, as shown in Figure 9-1, can be achieved by using more than one physical link. If a single link between a switch and a router fails, the network would not go down because of the link redundancy that is in place.


Figure 9-1 Redundant Network with Single Points of Failure

Image No single points of failure: A network without a single point of failure contains redundant network-infrastructure components (for example, switches and routers). In addition, these redundant devices are interconnected with redundant links. Although a network host could have two network interface cards (NICs), each of which connects to a different switch, such a design is rarely implemented because of the increased costs. Instead, as shown in Figure 9-2, a network with no single points of failure in the backbone allows any single switch or router in the backbone to fail, or any single link in the backbone to fail, while maintaining end-to-end network connectivity.


Figure 9-2 Redundant Network with No Single Point of Failure

These two approaches to fault-tolerant network design can be used together to increase a network’s availability even further.

Hardware Redundancy

Having redundant route processors in a switch or router chassis improves the chassis’s reliability. If a multilayer switch has two route processors, for example, one of the route processors could be active, with the other route processor standing by to take over in the event the active processor became unavailable.

An end system can have redundant NICs. The two modes of NIC redundancy are as follows:

Image Active-active: Both NICs are active at the same time, and each has its own MAC address. This makes troubleshooting more complex, while giving slightly better performance than the active-standby approach.

Image Active-standby: Only one NIC is active at a time. This approach allows the client to appear to have a single MAC address and IP address, even in the event of a NIC failure.

NIC redundancy is most often found in strategic network hosts, rather than in end-user client computers, because of the expense and administrative overhead incurred with a redundant NIC configuration.

Layer 3 Redundancy

End systems not running a routing protocol point to a default gateway. The default gateway is traditionally the IP address of a router on the local subnet. However, if the default gateway router fails, the end systems are unable to leave their subnet. Chapter 4, “Ethernet Technology,” introduced two first-hop redundancy technologies (which offer Layer 3 redundancy):


Image Hot Standby Router Protocol (HSRP): A Cisco proprietary approach to first-hop redundancy. Figure 9-3 shows a sample HSRP topology.


Figure 9-3 HSRP Sample Topology

In Figure 9-3, workstation A is configured with a default gateway (that is, a next-hop gateway) of 172.16.1.3. To prevent the default gateway from becoming a single point of failure, HSRP enables routers R1 and R2 to each act as the default gateway, supporting the virtual IP address of the HSRP group (172.16.1.3), although only one of the routers will act as the default gateway at any one time. Under normal conditions, router R1 (that is, the active router) forwards packets sent to virtual IP 172.16.1.3. However, if router R1 is unavailable, router R2 (that is, the standby router) can take over and start forwarding traffic sent to 172.16.1.3. Notice that neither router R1 nor R2 has a physical interface with an IP address of 172.16.1.3. Instead, a logical router (called a virtual router), which is serviced by either router R1 or R2, maintains the 172.16.1.3 IP address.

Image Common Address Redundancy Protocol (CARP): CARP is an open-standard alternative to the proprietary HSRP.

Image Virtual Router Redundancy Protocol (VRRP): VRRP is an IETF open standard that operates in a similar method to Cisco’s proprietary HSRP.

Image Gateway Load Balancing Protocol (GLBP): GLBP is another first-hop redundancy protocol that is proprietary to Cisco Systems.

With each of these technologies, the MAC address and the IP address of a default gateway can be serviced by more than one router (or multilayer switch). Therefore, if a default gateway becomes unavailable, the other router (or multilayer switch) can take over, still servicing the same MAC and IP addresses.
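As a concrete illustration of the HSRP topology in Figure 9-3, the configuration on router R1 might look something like the following Cisco IOS sketch. The interface name, group number, and priority value are invented for illustration; only the addressing comes from the figure:

```
! Hypothetical HSRP configuration for router R1 (the active router)
interface GigabitEthernet0/1
 ip address 172.16.1.1 255.255.255.0
 standby 1 ip 172.16.1.3      ! virtual IP shared with router R2
 standby 1 priority 110       ! higher priority makes R1 the active router
 standby 1 preempt            ! reclaim the active role after recovery
```

Router R2 would carry a matching `standby 1 ip 172.16.1.3` statement with a lower (or default) priority, so that it forwards traffic for 172.16.1.3 only when R1 is unavailable.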

Another type of Layer 3 redundancy is achieved by having multiple links between devices and selecting a routing protocol that load balances over the links. Link Aggregation Control Protocol (LACP), discussed in Chapter 4, enables you to assign multiple physical links to a logical interface, which appears as a single link to a route processor. Figure 9-4 illustrates a network topology using LACP.


Figure 9-4 LACP Sample Topology

Design Considerations for High-Availability Networks

When designing networks for high availability, answer the following questions:


Image Where will module and chassis redundancy be used?

Image What software redundancy features are appropriate?

Image What protocol characteristics affect design requirements?

Image What redundancy features should be used to provide power to an infrastructure device (for example, using an uninterruptible power supply [UPS] or a generator)?

Image What redundancy features should be used to maintain environmental conditions (for example, dual air-conditioning units)?


Note

Module redundancy provides redundancy within a chassis by allowing one module to take over in the event that a primary module fails. Chassis redundancy provides redundancy by having more than one chassis, thus providing a path from the source to the destination, even in the event of a chassis or link failure.


High-Availability Best Practices

The following are five best practices for designing high-availability networks:


1. Examine technical goals.

2. Identify the budget to fund high-availability features.

3. Categorize business applications into profiles, each of which requires a certain level of availability.

4. Establish performance standards for high-availability solutions.

5. Define how to manage and measure the high-availability solution.

Although existing networks can be retrofitted to make them highly available networks, network designers can often reduce such expenses by integrating high-availability best practices and technologies into the initial design of a network.

Content Caching

Chapter 3, “Network Components,” introduced the concept of a content engine (also known as a caching engine). A content engine is a network appliance that can receive a copy of content stored elsewhere (for example, a video presentation located on a server at a corporate headquarters) and serve that content to local clients, thus reducing the bandwidth burden on an IP WAN. Figure 9-5 shows a sample topology using a content engine as a network optimization technology.


Figure 9-5 Content Engine Sample Topology

Load Balancing

Another network optimization technology introduced in Chapter 3 was content switching, which allows a request coming into a server farm to be distributed across multiple servers containing identical content. This approach to load balancing lightens the load on individual servers in a server farm and allows servers to be taken out of the farm for maintenance without disrupting access to the server farm’s data. Figure 9-6 illustrates a sample content switching topology, which performs load balancing across five servers (containing identical content) in a server farm.


Figure 9-6 Content Switching Sample Topology

QoS Technologies

Quality of service (QoS) is a suite of technologies that allows you to strategically optimize network performance for select traffic types. For example, in today’s converged networks (that is, networks simultaneously transporting voice, video, and data), some applications (for example, voice) might be more intolerant of delay (that is, latency) than other applications (for example, an FTP file transfer is less latency sensitive than a Voice over IP [VoIP] call). Fortunately, through the use of QoS technologies, you can identify which traffic types need to be sent first, how much bandwidth to allocate to various traffic types, which traffic types should be dropped first in the event of congestion, and how to make the most efficient use of the relatively limited bandwidth of an IP WAN. This section introduces QoS and a collection of QoS mechanisms.

Introduction to QoS

A lack of bandwidth is the overshadowing issue for most quality problems. Specifically, when there is a lack of bandwidth, packets might suffer from one or more of the symptoms shown in Table 9-1.


Table 9-1 Three Categories of Quality Issues

Fortunately, QoS features available on many routers and switches can recognize important traffic and then treat that traffic in a special way. For example, you might want to allocate 128 Kbps of bandwidth for your VoIP traffic and give that traffic priority treatment.

Consider water flowing through a series of pipes with varying diameters. The water’s flow rate through those pipes is limited to the water’s flow rate through the pipe with the smallest diameter. Similarly, as a packet travels from its source to its destination, its effective bandwidth is the bandwidth of the slowest link along that path. For example, consider Figure 9-7. Notice that the slowest link speed is 256 Kbps. This weakest link becomes the effective bandwidth between client and server.


Figure 9-7 Effective Bandwidth of 256 Kbps
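This weakest-link behavior can be sketched in a line of Python. The link speeds are hypothetical, apart from the 256-Kbps bottleneck taken from Figure 9-7:

```python
# Effective end-to-end bandwidth is the speed of the slowest link on the path.
link_speeds_kbps = [100_000, 1_536, 256, 10_000]  # hypothetical path
effective_bandwidth_kbps = min(link_speeds_kbps)
print(effective_bandwidth_kbps)  # -> 256
```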

Because the primary challenge of QoS is a lack of bandwidth, the logical question is, “How do we increase available bandwidth?” A knee-jerk response to that question is often, “Add more bandwidth.” Although there is no substitute for more bandwidth, it often comes at a relatively high cost.

Compare your network to a highway system in a large city. During rush hour, the lanes of the highway are congested, but the lanes might be underutilized during other periods of the day. Instead of just building more lanes to accommodate peak traffic rates, the highway engineers might add a carpool lane. Cars with two or more riders can use the reserved carpool lane because they have a higher priority on the highway. Similarly, you can use QoS features to give your mission-critical applications higher-priority treatment in times of network congestion.

QoS Configuration Steps

The mission statement of QoS could read something like this: “To categorize traffic and apply a policy to those traffic categories, in accordance with a QoS policy.” Understanding this underlying purpose of QoS can help you better understand the three basic steps to QoS configuration:

Step 1. Determine network performance requirements for various traffic types. For example, consider these design recommendations for voice, video, and data traffic:

Image Voice: No more than 150 ms of one-way delay; no more than 30 ms of jitter; and no more than 1 percent packet loss.

Image Video: No more than 150 ms of one-way delay for interactive video applications (for example, video conferencing); no more than 30 ms of jitter; and no more than 1 percent packet loss.

Image Data: Applications have varying delay and loss requirements. Therefore, data applications should be categorized into predefined classes of traffic, where each class is configured with specific delay and loss characteristics.

Step 2. Categorize traffic into specific categories. For example, you might have a category named Low Delay, and you decide to place voice and video packets in that category. You might also have a Low Priority class, where you place traffic such as music downloads from the Internet.

Step 3. Document your QoS policy and make it available to your users. Then, for example, if a user complains that his network-gaming applications are running slowly, you can point him to your corporate QoS policy, which describes how applications such as network gaming have best-effort treatment while VoIP traffic receives priority treatment.

The actual implementation of these steps varies based on the specific device you are configuring. In some cases, you might be using the command-line interface (CLI) of a router or switch. In other cases, you might have some sort of graphical-user interface (GUI) through which you configure QoS on your routers and switches.

QoS Components

QoS features are categorized into one of the three categories shown in Table 9-2.


Table 9-2 Three Categories of QoS Mechanisms

Figure 9-8 summarizes these three QoS categories.


Figure 9-8 QoS Categories

QoS Mechanisms

As previously mentioned, a DiffServ approach to QoS marks traffic. However, for markings to impact the behavior of traffic, a QoS tool must reference those markings and alter the packets’ treatment based on them. The following is a collection of commonly used QoS mechanisms:


Image Classification

Image Marking

Image Congestion management

Image Congestion avoidance

Image Policing and shaping

Image Link efficiency

The following sections describe each QoS mechanism in detail.

Classification


Classification is the process of placing traffic into different categories. Multiple characteristics can be used for classification. For example, POP3, IMAP, SMTP, and Exchange traffic could all be placed in an E-MAIL class. Classification does not, however, alter any bits in the frame or packet.

Marking


Marking alters bits within a frame, cell, or packet to indicate how the network should treat that traffic. Marking alone does not change how the network treats a packet. Other tools (such as queuing tools) can, however, reference those markings and make decisions based on the markings.

Various packet markings exist. For example, inside an IPv4 header, there is a byte called type of service (ToS). You can mark packets, using bits within the ToS byte, using either IP Precedence or differentiated service code point (DSCP), as shown in Figure 9-9.


Figure 9-9 ToS Byte

IP Precedence uses the 3 leftmost bits in the ToS byte. With 3 bits at its disposal, IP Precedence markings can range from 0 to 7. However, 6 and 7 should not be used because those values are reserved for network use.

For more granularity, you might choose DSCP, which uses the 6 leftmost bits in the ToS byte. Six bits yield 64 possible values (0–63).
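Because IP Precedence and DSCP both occupy the leftmost bits of the ToS byte, either value can be recovered with a simple bit shift. A sketch (the example value is the well-known Expedited Forwarding codepoint):

```python
# Extract IP Precedence (leftmost 3 bits) and DSCP (leftmost 6 bits)
# from an IPv4 ToS byte.
def ip_precedence(tos_byte):
    return tos_byte >> 5   # values 0-7

def dscp(tos_byte):
    return tos_byte >> 2   # values 0-63

tos = 0b10111000           # DSCP 46 (Expedited Forwarding)
print(ip_precedence(tos), dscp(tos))  # -> 5 46
```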

Congestion Management


When a device such as a switch or a router receives traffic faster than it can transmit that traffic, the device attempts to buffer (or store) the extra traffic until bandwidth becomes available. This buffering process is called queuing or congestion management. Queuing algorithms (for example, weighted fair queuing [WFQ], low-latency queuing [LLQ], or weighted round-robin [WRR]) can be configured on routers and switches. These algorithms divide an interface’s buffer into multiple logical queues, as shown in Figure 9-10. The queuing algorithm then empties packets from those logical queues in a sequence and amount determined by the algorithm’s configuration. For example, traffic could first be sent from a priority queue (which might contain VoIP packets) up to a certain bandwidth limit, after which packets could be sent from a different queue.


Figure 9-10 Queuing Example
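The weighted round-robin idea can be sketched in a few lines of Python. The queue names, weights, and packet labels are all invented for illustration:

```python
from collections import deque

# Minimal weighted round-robin sketch: each cycle empties more packets
# from higher-weight queues than from lower-weight queues.
queues = {"voice": deque(["v1", "v2", "v3"]),
          "data":  deque(["d1", "d2", "d3"])}
weights = {"voice": 2, "data": 1}   # voice gets twice the service per cycle

sent = []
while any(queues.values()):
    for name, q in queues.items():
        for _ in range(weights[name]):
            if q:
                sent.append(q.popleft())

print(sent)  # -> ['v1', 'v2', 'd1', 'v3', 'd2', 'd3']
```

Notice that voice packets are interleaved ahead of data packets in proportion to their weight, which is the essence of weighted queuing.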

Congestion Avoidance


If an interface’s output queue fills to capacity, newly arriving packets are discarded (or tail dropped). To prevent this behavior, a congestion-avoidance technique called random early detection (RED) can be used, as illustrated in Figure 9-11. After a queue depth reaches a configurable level (minimum threshold), RED introduces the possibility of a packet discard. As the queue depth continues to increase, the possibility of a discard increases until a configurable maximum threshold is reached. After the queue depth exceeds the maximum threshold for traffic with a specific priority, there is a 100 percent chance of discard for those traffic types. If those discarded packets are TCP based (connection oriented), the sender knows which packets are discarded and can retransmit those dropped packets. However, if those dropped packets are UDP based (that is, connectionless), the sender does not receive an indication that the packets were dropped.


Figure 9-11 Random Early Detection (RED)
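The RED behavior described above (no drops below the minimum threshold, a rising drop probability between the thresholds, and guaranteed tail drop above the maximum) can be sketched as follows. The threshold and probability values are hypothetical:

```python
# RED sketch: drop probability rises linearly between the thresholds.
def red_drop_probability(queue_depth, min_th, max_th, max_p):
    if queue_depth < min_th:
        return 0.0                 # below minimum threshold: never drop
    if queue_depth >= max_th:
        return 1.0                 # above maximum threshold: always drop
    return max_p * (queue_depth - min_th) / (max_th - min_th)

print(red_drop_probability(10, min_th=20, max_th=40, max_p=0.1))  # -> 0.0
print(red_drop_probability(25, min_th=20, max_th=40, max_p=0.1))  # -> 0.025
print(red_drop_probability(45, min_th=20, max_th=40, max_p=0.1))  # -> 1.0
```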

Policing and Shaping


Instead of making a minimum amount of bandwidth available for specific traffic types, you might want to limit available bandwidth. Both policing and traffic-shaping tools can accomplish this objective. Collectively, these tools are called traffic conditioners.

Policing can be used in either the inbound or the outbound direction, and it typically discards packets that exceed the configured rate limit, which you can think of as a speed limit for specific traffic types. Because policing drops packets, resulting in retransmissions, it is recommended for higher-speed interfaces.

Shaping buffers (and therefore delays) traffic exceeding a configured rate. Therefore, shaping is recommended for slower-speed interfaces.

Because traffic shaping (and policing) can limit the speed of packets exiting a router, a question arises: “How do we send traffic out of an interface at a rate that is less than the physical clock rate of the interface?” For this to be possible, shaping and policing tools do not transmit all the time. Specifically, they send a certain number of bits or bytes at line rate, and then they stop sending, until a specific timing interval (for example, one-eighth of a second) is reached. After the timing interval is reached, the interface again sends a specific amount of traffic at line rate. It stops and waits for the next timing interval to occur. This process continually repeats, allowing an interface to send an average bandwidth that might be below the physical speed of the interface. This average bandwidth is called the committed information rate (CIR). The number of bits (the unit of measure used with shaping tools) or bytes (the unit of measure used with policing tools) that are sent during a timing interval is called the committed burst (Bc). The timing interval is written as Tc.

For example, imagine that you have a physical line rate of 128 Kbps, but the CIR is only 64 Kbps. Also, assume there are eight timing intervals in a second (that is, Tc = 1/8th of a second = 125 ms), and during each of those timing intervals, 8000 bits (the committed burst parameter) are sent at line rate. Therefore, over the period of a second, 8000 bits were sent (at line rate) eight times, for a grand total of 64,000 bits per second, which is the CIR. Figure 9-12 illustrates this shaping of traffic to 64 Kbps on a line with a rate of 128 Kbps.


Figure 9-12 Traffic Shaping

If all the Bc bits (or bytes) were not sent during a timing interval, there is an option to bank those bits and use them during a future timing interval. The parameter that allows this storing of unused potential bandwidth is called the excess burst (Be) parameter. The Be parameter in a shaping configuration specifies the maximum number of bits or bytes that can be sent in excess of the Bc during a timing interval, if those bits are indeed available. For those bits or bytes to be available, they must have gone unused during previous timing intervals. Policing tools, however, use the Be parameter to specify the maximum number of bytes that can be sent during a timing interval. Therefore, in a policing configuration, if the Bc equals the Be, no excess bursting occurs. If excess bursting occurs, policing tools consider this excess traffic as exceeding traffic. Traffic that conforms to (does not exceed) a specified CIR is considered by a policing tool to be conforming traffic.

The relationship between the Tc, Bc, and CIR is given with this formula: CIR = Bc / Tc. Alternately, the formula can be written as Tc = Bc / CIR. Therefore, if you want a smaller timing interval, configure a smaller Bc.
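Plugging the chapter's shaping example into the CIR = Bc / Tc formula confirms the numbers:

```python
# Relationship between CIR, Bc, and Tc for traffic shaping.
# Values mirror the text's example: 8,000-bit Bc, 125-ms Tc.
bc_bits = 8_000
tc_seconds = 0.125                 # 1/8 of a second
cir_bps = bc_bits / tc_seconds
print(int(cir_bps))                # -> 64000 (that is, 64 Kbps)

# Rearranged as Tc = Bc / CIR: halving Bc at the same CIR halves Tc.
print(4_000 / 64_000)              # -> 0.0625 (a 62.5-ms interval)
```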

Link Efficiency


To make the most of the limited bandwidth available on slower-speed links, you might choose to implement compression or link fragmentation and interleaving (LFI). Although you could compress a packet’s payload or header to conserve bandwidth, as one example, consider header compression. With VoIP packets, the Layer 3 and Layer 4 headers total 40 bytes in size. However, depending on how you encode voice, the voice payload might be only 20 bytes in size. As a result, VoIP benefits most from header compression, as opposed to payload compression.

VoIP sends packets using Real-Time Transport Protocol (RTP), which is a Layer 4 protocol. RTP is then encapsulated inside UDP (another Layer 4 protocol), which is then encapsulated inside IP (at Layer 3). RTP header compression (cRTP) can take the Layer 3 and Layer 4 headers and compress them to only 2 or 4 bytes in size (2 bytes if UDP checksums are not used and 4 bytes if UDP checksums are used), as shown in Figure 9-13.


Figure 9-13 RTP Header Compression (cRTP)
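The bandwidth savings from cRTP can be estimated from the header sizes given above; the 20-byte payload matches the text's example:

```python
# Per-packet overhead for a VoIP stream (header sizes from the text).
ip_hdr, udp_hdr, rtp_hdr = 20, 8, 12   # bytes; 40 bytes of headers total
payload = 20                           # typical 20-byte voice sample
crtp_hdr = 2                           # cRTP without UDP checksums

uncompressed = ip_hdr + udp_hdr + rtp_hdr + payload   # 60 bytes per packet
compressed = crtp_hdr + payload                       # 22 bytes per packet
print(round(100 * (1 - compressed / uncompressed)))   # -> 63 (% saved)
```

The roughly two-thirds reduction shows why header compression, rather than payload compression, pays off for small VoIP packets.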

LFI addresses the issue of serialization delay, which is the amount of time required for a packet to exit an interface. A large data packet, for example, on a slower-speed link might create excessive delay for a voice packet because of the time required for the data packet to exit the interface. LFI fragments the large packets and interleaves the smaller packets among the fragments, reducing the serialization delay experienced by the smaller packets. Figure 9-14 shows the operation of LFI, where the packets labeled D are data packets, and the packets labeled V are voice packets.


Figure 9-14 Link Fragmentation and Interleaving (LFI)
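Serialization delay is simply the packet size divided by the link speed. A sketch with illustrative packet sizes and link speeds:

```python
# Serialization delay: the time for a packet to exit an interface.
def serialization_delay_ms(packet_bytes, link_bps):
    return packet_bytes * 8 / link_bps * 1000

# A 1,500-byte data packet on a 64-Kbps link (hypothetical values):
print(round(serialization_delay_ms(1500, 64_000), 1))       # -> 187.5 ms
# The same packet on a 100-Mbps link:
print(round(serialization_delay_ms(1500, 100_000_000), 2))  # -> 0.12 ms
```

The 187.5-ms figure on the slow link exceeds the 150-ms one-way delay budget for voice all by itself, which is exactly why LFI fragments large packets on slower-speed links.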

Case Study: SOHO Network Design

Based on what you learned from previous chapters and this chapter, this section challenges you to create a network design to meet a collection of criteria. Because network design is part science and part art, multiple designs can meet the specified requirements. However, as a reference, this section presents one solution, against which you can contrast your solution.

Case Study Scenario

While working through your design, consider the following:

Image Meeting all requirements

Image Media distance limitations

Image Network device selection

Image Environmental factors

Image Compatibility with existing and future equipment

The following are your design scenario and design criteria for this case study:

Image Company ABC leases two buildings (building A and building B) in a large office park, as shown in Figure 9-15. The office park has a conduit system that allows physical media to run between buildings. The distance (via the conduit system) between building A and building B is 1 km.


Figure 9-15 Case Study Topology

Image Company ABC will use the Class B address of 172.16.0.0/16 for its sites. You should subnet this classful network not only to support the two buildings (one subnet per building), but to allow as many as five total sites in the future, as Company ABC continues to grow.

Image Company ABC needs to connect to the Internet, supporting a speed of at least 30 Mbps, and this connection should come into building A.

Image Cost is a primary design consideration, while performance is a secondary design consideration.

Image Each building contains various Wi-Fi client devices (for example, smartphones, tablets, and laptops).

Image Table 9-3 identifies the number of hosts contained in each building and the number of floors contained in each building.


Table 9-3 Case Study Information for Buildings A and B

Your design should include the following information:

Image Network address and subnet mask for building A

Image Network address and subnet mask for building B

Image Layer 1 media selection

Image Layer 2 device selection

Image Layer 3 device selection

Image Wireless design

Image Any design elements based on environmental considerations

Image An explanation of where cost savings were created from performance trade-offs

Image A topological diagram of the proposed design

On separate sheets of paper, create your network design. After your design is complete, perform a sanity check by contrasting the listed criteria against your design. Finally, while keeping in mind that multiple designs could meet the design criteria, you can review the following suggested solution. In the real world, reviewing the logic behind other designs can often give you a fresh perspective for future designs.

Suggested Solution

This suggested solution begins with IP address allocation. Then, consideration is given to the Layer 1 media, followed by Layer 2 and Layer 3 devices. Wireless design decisions are presented, and design elements based on environmental factors are discussed. The suggested solution also addresses how cost savings were achieved through performance trade-offs. Finally, a topological diagram of the suggested solution is presented.

IP Addressing

Questions you might need to ask when designing the IP addressing of a network include the following:

Image How many hosts do you need to support (now and in the future)?

Image How many subnets do you need to support (now and in the future)?

From the scenario, you know that each subnet must accommodate at least 200 hosts. Also, you know that you must accommodate at least five subnets. In this solution, the subnet mask is based on the number of required subnets. Eight subnets are supported with 3 borrowed bits, whereas 2 borrowed bits support only four subnets, based on this formula:

Number of subnets = 2^s

where s is the number of borrowed bits

With 3 borrowed bits, we have 13 bits left for host IP addressing, which is much more than needed to accommodate 200 host IP addresses. These 3 borrowed bits yield a subnet mask of 255.255.224.0. Because the third octet is the last octet to contain a binary 1 in the subnet mask, the third octet is the interesting octet.

The block size can be calculated by subtracting the subnet decimal value in the interesting octet from 256: 256 – 224 = 32. Because the block size is 32 and the interesting octet is the third octet, the following subnets are created with the 255.255.224.0 (that is, /19) subnet mask:

Image 172.16.0.0 /19

Image 172.16.32.0 /19

Image 172.16.64.0 /19

Image 172.16.96.0 /19

Image 172.16.128.0 /19

Image 172.16.160.0 /19

Image 172.16.192.0 /19

Image 172.16.224.0 /19
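The subnetting arithmetic above can be checked with Python's standard ipaddress module, which generates the same eight /19 subnets:

```python
import ipaddress

# Borrowing 3 bits from 172.16.0.0/16 yields eight /19 subnets,
# with a block size of 256 - 224 = 32 in the third (interesting) octet.
subnets = list(ipaddress.ip_network("172.16.0.0/16").subnets(prefixlen_diff=3))
for net in subnets:
    print(net)   # 172.16.0.0/19, 172.16.32.0/19, ... 172.16.224.0/19

# Each /19 leaves 13 host bits: 2**13 - 2 usable host addresses.
print(subnets[0].num_addresses - 2)  # -> 8190
```

With 8,190 usable addresses per subnet, the 200-host requirement is met with plenty of room for growth.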

The first two subnets are selected for the building A and building B subnet, as shown in Table 9-4.


Table 9-4 Case Study Suggested Solution: Network Addresses

Layer 1 Media

Questions you might need to ask when selecting the Layer 1 media types of a network include the following:

Image What speeds need to be supported (now and in the future)?

Image What distances between devices need to be supported (now and in the future)?

Within each building, Category 6a (Cat 6a) unshielded twisted-pair (UTP) cabling is selected to interconnect network components. The installation is based on Gigabit Ethernet. However, if 10-Gigabit Ethernet devices are installed in the future, Cat 6a is rated for 10GBASE-T at distances as long as 100 m.

The 1-km distance between building A and building B is too far for UTP cabling. Therefore, multimode fiber (MMF) is selected. The speed of the fiber link will be 1 Gbps. Table 9-5 summarizes these media selections.

Image

Table 9-5 Case Study Suggested Solution: Layer 1 Media

Layer 2 Devices

Questions you might need to ask when selecting Layer 2 devices in a network include the following:

Image Where will the switches be located?

Image What port densities are required on the switches (now and in the future)?

Image What switch features need to be supported (for example, STP or LACP)?

Image What media types are used to connect to the switches?

A collection of Ethernet switches interconnects network devices within each building. Assume the 200 hosts in building A are distributed relatively evenly across the three floors (each floor contains approximately 67 hosts). Therefore, each floor will have a wiring closet containing two Ethernet switches: one 48-port switch and one 24-port switch. Each closet is connected to a multilayer switch located in building A over four connections logically bundled together with Link Aggregation Control Protocol (LACP).
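As a rough sanity check on the closet design, the port arithmetic can be sketched as follows. This sketch assumes each closet's two switches share a single four-link LACP uplink (an assumption for illustration; per-switch bundles would consume more ports):

```python
# Hypothetical sanity check of the building A wiring-closet design:
# ~200 hosts spread across 3 floors, one 48-port and one 24-port
# switch per floor, with 4 ports per closet assumed reserved for
# the LACP uplink to the multilayer switch.
hosts = 200
floors = 3
ports_per_closet = 48 + 24
uplink_ports = 4

hosts_per_floor = -(-hosts // floors)           # ceiling division -> 67
usable_ports = ports_per_closet - uplink_ports  # 68 ports left for hosts

print(hosts_per_floor)                  # 67
print(usable_ports >= hosts_per_floor)  # True: each closet can serve its floor
```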

Within building B, two Ethernet switches, each with 48 ports, and one Ethernet switch, with 24 ports, are installed in a wiring closet. These switches are interconnected in a stacked configuration, using four connections logically bundled together with LACP. One of the switches has an MMF port, which allows it to connect via fiber to building A’s multilayer switch.

Table 9-6 summarizes the switch selections.

Image

Table 9-6 Case Study Suggested Solution: Layer 2 Devices

Layer 3 Devices

Questions you might need to ask when selecting Layer 3 devices for a network include the following:

Image How many interfaces are needed (now and in the future)?

Image What types of interfaces need to be supported (now and in the future)?

Image What routing protocol(s) need to be supported?

Image What router features (for example, HSRP or security features) need to be supported?

Layer 3 devices consist of a multilayer switch located in building A. All switches within building A home back to the multilayer switch using four LACP-bundled links. The multilayer switch is equipped with at least one MMF port, which allows a connection with one of the Ethernet switches in building B.

The multilayer switch connects to a router via a Fast Ethernet connection. This router contains a serial interface, which connects to the Internet via a T3 connection.

Wireless Design

Questions you might need to ask when designing the wireless portion of a network include the following:

Image What wireless speeds need to be supported (now and in the future)?

Image What distances need to be supported between wireless devices and wireless access points (now and in the future)?

Image What IEEE wireless standards need to be supported?

Image What channels should be used?

Image Where should wireless access points be located?

Because the network needs to support various Wi-Fi clients, the 2.4-GHz band is chosen. Within building A, a wireless access point (AP) is placed on each floor of the building. To avoid interference, the nonoverlapping channels of 1, 6, and 11 are chosen. The 2.4-GHz band also allows compatibility with IEEE 802.11b/g/n.
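The choice of channels 1, 6, and 11 can be checked numerically. In the 2.4-GHz band, a channel's center frequency in MHz is 2407 + 5 × channel, and each 802.11b/g channel occupies roughly 22 MHz, so two channels are nonoverlapping when their centers are at least 22 MHz apart (the helper names below are illustrative, not from any library):

```python
# 2.4-GHz Wi-Fi channel plan: center frequency in MHz is 2407 + 5 * channel,
# and each 802.11b/g channel is roughly 22 MHz wide.
def center_mhz(channel):
    return 2407 + 5 * channel

def overlap(ch_a, ch_b, width_mhz=22):
    # Two channels overlap if their center frequencies are closer
    # than one channel width apart.
    return abs(center_mhz(ch_a) - center_mhz(ch_b)) < width_mhz

chosen = [1, 6, 11]
pairs = [(a, b) for i, a in enumerate(chosen) for b in chosen[i + 1:]]
print(all(not overlap(a, b) for a, b in pairs))  # True: 1, 6, 11 do not overlap
```

By contrast, adjacent channels such as 1 and 2 are only 5 MHz apart and therefore overlap, which is why only 1, 6, and 11 can be reused on nearby APs without mutual interference.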

Within building B, a single wireless AP accommodates Wi-Fi clients. Table 9-7 summarizes the wireless AP selection.

Image

Table 9-7 Case Study Suggested Solution: Wireless AP Selection

Environmental Factors

Questions you might need to ask when considering environmental factors of a network design include the following:

Image What temperature or humidity controls exist in the rooms containing network equipment?

Image What power redundancy systems are needed to provide power to network equipment in the event of a power outage?

Because the multilayer switch in building A could be a single point of failure for the entire network, the multilayer switch is placed in a well-ventilated room, which can help dissipate heat in the event of an air-conditioning failure. To further enhance the availability of the multilayer switch, the switch is connected to a UPS, which can help the multilayer switch continue to run for a brief time in the event of a power outage. Protection against an extended power outage could be achieved with the addition of a generator. However, no generator is included in this design because of budgetary reasons.

Cost Savings Versus Performance

When assimilating all the previously gathered design elements, you need to weigh budgetary constraints against network performance metrics. In this example, Gigabit Ethernet was chosen over 10-Gigabit Ethernet. In addition, the link between building A and building B could become a bottleneck, because it runs at 1 Gbps while carrying the aggregated traffic of multiple 1-Gbps links. However, cost savings are achieved by using 1-Gbps switch interfaces as opposed to 10-Gbps interfaces or a bundle of multiple 1-Gbps fiber links.

Topology

Figure 9-16 shows the topology of the proposed design based on the collection of previously listed design decisions.

Image

Figure 9-16 Case Study Proposed Topology

Real-World Case Study

Acme Inc.’s network design includes fault tolerance at several points in the network. The uplinks that go to the wiring closets from the MDF downstairs are implemented as redundant pairs, so that if a single pair fails or a single interface fails, the other fiber pair and associated interfaces can continue to forward traffic. The routing function is located downstairs, and each VLAN (and associated subnet) has a pair of routers acting as an HSRP group.

The firewalls that control traffic at the edge of the company’s networks are also set up in an active-active failover pair.

A dedicated VLAN just for voice traffic on the wired network has been set up with the appropriate marking of traffic. Routers and switches have been configured to identify voice traffic based on its markings, and if congestion is present the voice traffic will receive priority treatment for forwarding over the network.

The Active Directory servers that the company uses internally run on a virtualized hardware platform using VMware’s vSphere. VMware’s Fault Tolerance (FT) feature keeps a backup copy of the Active Directory server(s) available in the event the primary servers fail.

A VPN over the Internet will be used (via a second service provider) to connect the branch and headquarters offices if the Multiprotocol Label Switching (MPLS) path over the primary WAN through the primary service provider fails.

Abnormally high levels of Internet Control Message Protocol (ICMP) packets that are heading to the headquarters site from the Internet will be rate-limited at the service provider. This will reduce the potential for an ICMP-based attack that is attempting to consume all the bandwidth available to the HQ site.

Summary

The main topics covered in this chapter are the following:

Image Network availability was discussed, including how availability is measured and can be achieved through redundant designs.

Image Performance optimization strategies were discussed, including the use of content caching, link aggregation, and load balancing.

Image Various QoS technologies were reviewed, with an emphasis on traffic shaping, which can limit the rate of data transmission on a WAN link to the CIR.

Image You were given a case study, where you were challenged to design a network to meet a collection of criteria.

Exam Preparation Tasks

Review All the Key Topics

Review the most important topics from inside the chapter, noted with the Key Topic icon in the outer margin of the page. Table 9-8 lists these key topics and the page numbers where each is found.

Image

Table 9-8 Key Topics for Chapter 9

Complete Tables and Lists from Memory

Print a copy of Appendix D, “Memory Tables” (found on the DVD), or at least the section for this chapter, and complete the tables and lists from memory. Appendix E, “Memory Table Answer Key,” also on the DVD, includes the completed tables and lists so you can check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the Glossary:

availability

reliability

Common Address Redundancy Protocol (CARP)

uninterruptible power supply (UPS)

latency

jitter

integrated services (IntServ)

differentiated services

classification

marking

congestion management

congestion avoidance

policing

traffic shaping

committed information rate (CIR)

link efficiency

Review Questions

The answers to these review questions are in Appendix A, “Answers to Review Questions.”

1. If a network has the five nines of availability, how much downtime does it experience per year?

a. 30 seconds

b. 5 minutes

c. 12 minutes

d. 26 minutes

2. What mode of NIC redundancy has only one NIC active at a time?

a. Publisher-subscriber

b. Client-server

c. Active-standby

d. Active-subscriber

3. What performance optimization technology uses a network appliance, which can receive a copy of content stored elsewhere (for example, a video presentation located on a server at a corporate headquarters), and serves that content to local clients, thus reducing the bandwidth burden on an IP WAN?

a. Content engine

b. Load balancer

c. LACP

d. CARP

4. A lack of bandwidth can lead to which QoS issues? (Choose three.)

a. Delay

b. Jitter

c. Prioritization

d. Packet drops

5. What is the maximum recommended one-way delay for voice traffic?

a. 25 ms

b. 75 ms

c. 125 ms

d. 150 ms

6. Which of the following QoS mechanisms is considered an IntServ mechanism?

a. LLQ

b. RSVP

c. RED

d. cRTP

7. Identify the congestion-avoidance mechanism from the following list of QoS tools.

a. LLQ

b. RSVP

c. RED

d. cRTP

8. Which traffic-shaping parameter is a measure of the average number of bits transmitted during a timing interval?

a. CIR

b. Tc

c. Bc

d. Be

9. RTP header compression can compress the combined Layer 3 and Layer 4 headers from 40 bytes down to how many bytes?

a. 1–3 bytes

b. 2–4 bytes

c. 3–5 bytes

d. 4–6 bytes

10. What type of delay is the amount of time required for a packet to exit a router’s serial interface?

a. Serialization delay

b. Packetization delay

c. Propagation delay

d. Queuing delay