CCNP Routing and Switching SWITCH 300-115 Official Cert Guide (2015)

Part I. Designing Campus Networks

Chapter 1. Enterprise Campus Network Design

This chapter covers the following topics that you need to master for the CCNP SWITCH exam:

- Hierarchical Network Design: This section details a three-layer hierarchical structure of campus network designs.

- Modular Network Design: This section covers the process of designing a campus network, based on breaking it into functional modules. You also learn how to size and scale the modules in a design.

This chapter presents a logical design process that you can use to build a new switched campus network or to modify and improve an existing network. Networks can be designed in layers using a set of building blocks that can organize and streamline even a large, complex campus network. These building blocks can then be placed using several campus design models to provide maximum efficiency, functionality, and scalability.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt based on your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 1-1 outlines the major headings in this chapter and the “Do I Know This Already?” quiz questions that go with them. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”


Table 1-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

1. Where does a collision domain exist in a switched network?

a. On a single switch port

b. Across all switch ports

c. On a single VLAN

d. Across all VLANs

2. Where does a broadcast domain exist in a switched network?

a. On a single switch port

b. Across all switch ports

c. On a single VLAN

d. Across all VLANs

3. What is a VLAN primarily used for?

a. To segment a collision domain

b. To segment a broadcast domain

c. To segment an autonomous system

d. To segment a spanning-tree domain

4. How many layers are recommended in the hierarchical campus network design model?

a. 1

b. 2

c. 3

d. 4

e. 7

5. What is the purpose of breaking a campus network into a hierarchical design?

a. To facilitate documentation

b. To follow political or organizational policies

c. To make the network predictable and scalable

d. To make the network more redundant and secure

6. End-user PCs should be connected into which of the following hierarchical layers?

a. Distribution layer

b. Common layer

c. Access layer

d. Core layer

7. In which OSI layer should devices in the distribution layer typically operate?

a. Layer 1

b. Layer 2

c. Layer 3

d. Layer 4

8. A hierarchical network’s distribution layer aggregates which of the following?

a. Core switches

b. Broadcast domains

c. Routing updates

d. Access layer switches

9. In the core layer of a hierarchical network, which of the following are aggregated?

a. Routing tables

b. Packet filters

c. Distribution switches

d. Access layer switches

10. In a properly designed hierarchical network, a broadcast from one PC is confined to which one of the following?

a. One access layer switch port

b. One access layer switch

c. One switch block

d. The entire campus network

11. Which one or more of the following are the components of a typical switch block?

a. Access layer switches

b. Distribution layer switches

c. Core layer switches

d. E-commerce servers

e. Service provider switches

12. Which of the following are common types of core, or backbone, designs? (Choose all that apply.)

a. Collapsed core

b. Loop-free core

c. Dual core

d. Layered core

e. Multinode core

13. What is the maximum number of access layer switches that can connect into a single distribution layer switch?

a. 1

b. 2

c. Limited only by the number of ports on the access layer switch

d. Limited only by the number of ports on the distribution layer switch

e. Unlimited

14. A switch block should be sized according to which two of the following parameters? (Choose all that apply.)

a. The number of access layer users

b. A maximum of 250 access layer users

c. A study of the traffic patterns and flows

d. The amount of rack space available

e. The number of servers accessed by users

15. What evidence can be seen when a switch block is too large? (Choose all that apply.)

a. IP address space is exhausted.

b. You run out of access layer switch ports.

c. Broadcast traffic becomes excessive.

d. Traffic is throttled at the distribution layer switches.

e. Network congestion occurs.

16. How many distribution switches should be built into each switch block?

a. 1

b. 2

c. 4

d. 8

17. Which are the most important aspects to consider when designing the core layer in a large network? (Choose all that apply.)

a. Low cost

b. Switches that can efficiently forward traffic, even when every uplink is at 100 percent capacity

c. High port density of high-speed ports

d. A low number of Layer 3 routing peers

Foundation Topics

Hierarchical Network Design

A campus network is an enterprise network consisting of many LANs in one or more buildings, all connected and all usually in the same geographic area. A company typically owns the entire campus network and the physical wiring. Campus networks commonly consist of wired Ethernet LANs and shared wireless LANs.

An understanding of traffic flow is a vital part of the campus network design. You might be able to leverage high-speed LAN technologies and “throw bandwidth” at a network to improve traffic movement. However, the emphasis should be on providing an overall design that is tuned to known, studied, or predicted traffic flows. The network traffic can then be effectively moved and managed, and you can scale the campus network to support future needs.

As a starting point, consider the simple network shown in Figure 1-1. A collection of PCs, printers, and servers are all connected to the same network segment and use the 192.168.1.0 subnet. All devices on this network segment must share the available bandwidth.


Figure 1-1 Simple Shared Ethernet Network

Recall that if two or more hosts try to transmit at the same time on a shared network, their frames will collide and interfere. When collisions occur, all hosts must become silent and wait to retransmit their data. The boundary around such a shared network is called a collision domain. In Figure 1-1, the entire shared segment represents one collision domain.

A network segment with six hosts might not seem crowded. Suppose the segment contains hundreds of hosts instead. Now the network might not perform very well if many of the hosts are competing to use the shared media. Through network segmentation, you can reduce the number of stations on a segment. This, in turn, reduces the size of the collision domain and lowers the probability of collisions because fewer stations will try to transmit at a given time.

Broadcast traffic can also present a performance problem on a Layer 2 network because all broadcast frames flood to reach all hosts on a network segment. If the segment is large, the broadcast traffic can grow in proportion and monopolize the available bandwidth. In addition, all hosts on the segment must listen to and process every broadcast frame. To contain broadcast traffic, the idea is to provide a barrier at the edge of a LAN segment so that broadcasts cannot pass or be forwarded outward. The extent of a Layer 2 network, where a broadcast frame can reach, is known as a broadcast domain.

To limit the size of a collision domain, you can connect smaller numbers of hosts to individual switch interfaces. Ideally, each host should connect to a dedicated switch interface so that the host and switch port can operate in full-duplex mode, preventing collisions altogether. Switch interfaces do not propagate collisions, so each interface becomes its own collision domain, even if several interfaces belong to a common VLAN.

In contrast, when broadcast traffic is forwarded, it is flooded across switch interface boundaries. In fact, broadcast frames will reach every switch interface in a VLAN. In other words, a VLAN defines the extent of a broadcast domain. To reduce the size of a broadcast domain, you can segment a network or break it up into smaller Layer 2 VLANs. The smaller VLANs must be connected by a Layer 3 device, such as a router or a multilayer switch, as shown in Figure 1-2. The simple network of Figure 1-1 now has two segments or VLANs interconnected by Switch A, a multilayer switch. A Layer 3 device cannot propagate a collision condition from one segment to another, and it will not forward broadcasts between segments.


Figure 1-2 Example of Network Segmentation
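To make the segmentation in Figure 1-2 more concrete, the following is a minimal configuration sketch for a multilayer switch in the role of Switch A. The second VLAN number, the interface names, and the 192.168.2.0 subnet are assumptions made only for illustration; the 192.168.1.0 subnet from Figure 1-1 is kept for the first VLAN.

! Enable Layer 3 forwarding on the multilayer switch
ip routing
!
! Hypothetical second VLAN for the new segment
vlan 2
 name Segment-2
!
! One SVI per VLAN acts as the default gateway for that segment
interface Vlan1
 ip address 192.168.1.1 255.255.255.0
 no shutdown
!
interface Vlan2
 ip address 192.168.2.1 255.255.255.0
 no shutdown

Hosts in each VLAN would then use the corresponding SVI address as their default gateway, and the switch routes traffic between the two subnets without forwarding broadcasts from one broadcast domain into the other.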

The network might continue to grow as more users and devices are added to it. Switch A has a limited number of ports, so it cannot directly connect to every device. Instead, the network segments can be grown by adding a new switch to each, as shown in Figure 1-3.


Figure 1-3 Expanding a Segmented Network

Switch B aggregates traffic to and from VLAN 1, while Switch C aggregates VLAN 2. As the network continues to grow, more VLANs can be added to support additional applications or user communities. As an example, Figure 1-4 shows how Voice over IP (VoIP) has been implemented by placing IP phones into two new VLANs (10 and 20). The same two aggregating switches can easily support the new VLANs.


Figure 1-4 Network Growth Through New VLANs
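As a rough sketch of how an access port might support both a PC and an IP phone in Figure 1-4, one user-facing interface could be configured as follows. The interface number is hypothetical, the data VLAN is assumed to be VLAN 1, and VLAN 10 is one of the new voice VLANs mentioned above.

interface GigabitEthernet1/0/5
 description User PC with attached IP phone (hypothetical port)
 switchport mode access
 ! Data VLAN for the PC traffic
 switchport access vlan 1
 ! Voice VLAN for the attached IP phone
 switchport voice vlan 10
 spanning-tree portfast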

Predictable Network Model

Ideally, you should design a network with predictable behavior in mind to offer low maintenance and high availability. For example, a campus network needs to recover from failures and topology changes quickly and in a predetermined manner. You should scale the network to easily support future expansions and upgrades. With a wide variety of multiprotocol and multicast traffic, the network should be capable of efficiently connecting users with the resources they need, regardless of location.

In other words, design the network around traffic flows rather than a particular type of traffic. Ideally, the network should be arranged so that all end users are located at a consistent distance from the resources they need to use. If one user at one corner of the network passes through two switches to reach an email server, any other user at any other location in the network should also require two switch hops for email service.


Cisco has refined a hierarchical approach to network design that enables network designers to organize the network into distinct layers of devices. The resulting network is efficient, intelligent, scalable, and easily managed.

Figure 1-4 can be redrawn to emphasize the hierarchy that is emerging. In Figure 1-5, two layers become apparent: the access layer, where switches are placed closest to the end users; and the distribution layer, where access layer switches are aggregated.


Figure 1-5 Two-Layer Network Hierarchy Emerges

As the network continues to grow with more buildings, more floors, and larger groups of users, the number of access switches increases. As a result, the number of distribution switches increases. Now things have scaled to the point where the distribution switches need to be aggregated. This is done by adding a third layer to the hierarchy, the core layer, as shown in Figure 1-6.


Figure 1-6 Core Layer Emerges

Traffic flows in a campus network can be classified as three types, based on where the network service or resource is located in relation to the end user. Figure 1-7 illustrates the flow types between a PC and some file servers, along with three different paths the traffic might take through the three layers of a network. Table 1-2 also lists the types and the extent of the campus network that is crossed going from any user to the service.


Figure 1-7 Traffic Flow Paths Through a Network Hierarchy


Table 1-2 Types of Network Services

Notice how easily the traffic paths can be described. Regardless of where the user is located, the traffic path always begins at the access layer and progresses into the distribution and perhaps into the core layers. Even a path between two users at opposite ends of the network becomes a consistent and predictable access > distribution > core > distribution > access path.

Each layer has attributes that provide both physical and logical network functions at the appropriate point in the campus network. Understanding each layer and its functions or limitations is important to properly apply the layer in the design process.

Access Layer

The access layer exists where the end users are connected to the network. Access switches usually provide Layer 2 (VLAN) connectivity between users. Devices in this layer, sometimes called building access switches, should have the following capabilities:

- Low cost per switch port

- High port density

- Scalable uplinks to higher layers

- High availability

- Ability to converge network services (that is, data, voice, video)

- Security features and quality of service (QoS)

Distribution Layer

The distribution layer provides interconnection between the campus network’s access and core layers. Devices in this layer, sometimes called building distribution switches, should have the following capabilities:

- Aggregation of multiple access layer switches

- High Layer 3 routing throughput for packet handling

- Security and policy-based connectivity functions

- QoS features

- Scalable and redundant high-speed links to the core and access layers

In the distribution layer, uplinks from all access layer devices are aggregated, or come together. The distribution layer switches must be capable of processing the total volume of traffic from all the connected devices. These switches should have a high port density of high-speed links to support the collection of access layer switches.

VLANs and broadcast domains converge at the distribution layer, requiring routing, filtering, and security. The switches at this layer also must be capable of routing packets with high throughput.

Notice that the distribution layer usually is a Layer 3 boundary, where routing meets the VLANs of the access layer.

Core Layer

A campus network’s core layer provides connectivity between all distribution layer devices. The core, sometimes referred to as the backbone, must be capable of switching traffic as efficiently as possible. Core switches should have the following attributes:

- Very high Layer 3 routing throughput

- No costly or unnecessary packet manipulations (access lists, packet filtering)

- Redundancy and resilience for high availability

- Advanced QoS functions

Devices in a campus network’s core layer or backbone should be optimized for high-performance switching. Because the core layer must handle large amounts of campus-wide data, the core layer should be designed with simplicity and efficiency in mind.

Although campus network design is presented as a three-layer approach (access, distribution, and core layers), the hierarchy can be collapsed or simplified in certain cases. For example, small or medium-size campus networks might not have the size or volume requirements that would require the functions of all three layers. In that case, you could combine the distribution and core layers for simplicity and cost savings. When the distribution and core layers are combined into a single layer of switches, a collapsed core network results.

Modular Network Design

Designing a new network that has a hierarchy with three layers is fairly straightforward. You can also migrate an existing network into a hierarchical design. The resulting network is organized, efficient, and predictable. However, a simple hierarchical design does not address other best practices such as redundancy, for when a switch or a link fails, or scalability, for when large additions to the network are needed.

Consider the hierarchical network shown in the left portion of Figure 1-8. Each layer of the network is connected to the adjacent layer by single links. If a link fails, a significant portion of the network will become isolated. In addition, the access layer switches are aggregated into a single distribution layer switch. If that switch fails, all the users will become isolated.


Figure 1-8 Improving Availability in the Distribution and Access Layers

To mitigate a potential distribution switch failure, you can add a second, redundant distribution switch. To mitigate a potential link failure, you can add redundant links from each access layer switch to each distribution switch. These improvements are shown on the right in Figure 1-8.

One weakness is still present in the redundant design of Figure 1-8: The core layer has only one switch. If that core switch fails, users in the access layer will still be able to communicate with each other. However, they will not be able to reach other areas of the network, such as a data center, the Internet, and so on. To mitigate the effects of a core switch failure, you can add a second, redundant core switch, as shown in Figure 1-9. Redundant links should also be added between each distribution layer switch and each core layer switch.


Figure 1-9 Fully Redundant Hierarchical Network Design

The redundancy needed for the small network shown in Figure 1-9 is fairly straightforward. As the network grows and more redundant switches and redundant links are added into the design, the design can become confusing. For example, suppose many more access layer switches need to be added to the network of Figure 1-9 because several departments of users have moved into the building or into an adjacent building. Should the new access layer switches be dual-connected into the same two distribution switches? Should new distribution switches be added, too? If so, should each of the distribution switches be connected to every other distribution and every other core switch, creating a fully meshed network?

Figure 1-10 shows one possible network design that might result. With so many interconnecting links between switches, it becomes a “brain-buster” exercise to figure out where VLANs are trunked, what the spanning-tree topologies look like, which links should have Layer 3 connectivity, and so on. Users might have connectivity through this network, but it might not be clear how they are actually working or what has gone wrong if they are not working. This network looks more like a spider’s web than an organized, streamlined design.


Figure 1-10 Network Growth in a Disorganized Fashion

To maintain organization, simplicity, and predictability, you can design a campus network in a logical manner, using a modular approach. In this approach, each layer of the hierarchical network model can be broken into basic functional units. These units, or modules, can then be sized appropriately and connected, while allowing for future scalability and expansion.

You can divide enterprise campus networks into the following basic elements or building blocks:

- Switch block: A group of access layer switches, together with their distribution switches. This is also called an access distribution block, named for the two switch layers that it contains. The dashed rectangles in Figures 1-8 through 1-10 represent typical switch blocks.

- Core: The campus network’s backbone, which connects all switch blocks.

Other related elements can exist. Although these elements do not contribute to the campus network’s overall function, they can be designed separately and added to the network design. For example, a data center containing enterprise resources or services can have its own access and distribution layer switches, forming a switch block that connects into the core layer. In fact, if the data center is very large, it might have its own core switches, too, which connect into the normal campus core.

Recall how a campus network is divided into access, distribution, and core layers. The switch block contains switching devices from the access and distribution layers. The switch block then connects into the core layer, providing end-to-end connectivity across the campus.

As the network grows, you can add new access layer switches by connecting them into an existing pair of distribution switches, as shown in Figure 1-11. You could also add a completely new access distribution switch block that contains the areas of new growth, as shown in Figure 1-12.


Figure 1-11 Network Growth by Adding Access Switches to a Switch Block


Figure 1-12 Network Growth by Adding New Switch Blocks

Sizing a Switch Block

Containing access and distribution layer devices, the switch block is simple in concept. You should consider several factors, however, to determine an appropriate size for the switch block. The range of available switch devices makes the switch block size very flexible. At the access layer, switch selection is usually based on port density or the number of connected users.

The distribution layer must be sized according to the number of access layer switches that are aggregated or brought into a distribution device. Consider the following factors:

- Traffic types and patterns

- Amount of Layer 3 switching capacity at the distribution layer

- Total number of users connected to the access layer switches

- Geographic boundaries of subnets or VLANs

Designing a switch block based solely on the number of users or stations contained within the block is usually inaccurate. Usually, no more than 2000 users should be placed within a single switch block. Although this is useful for initially estimating a switch block’s size, this idea doesn’t take into account the many dynamic processes that occur on a functioning network.

Instead, switch block size should be based primarily on the following:

- Traffic types and behavior

- Size and number of common workgroups

Because of the dynamic nature of networks, it is possible to size a switch block too large to handle the load that is placed on it. Also, the number of users and applications on a network tends to grow over time. A provision to break up or downsize a switch block might be necessary as time passes. Again, base these decisions on the actual traffic flows and patterns present in the switch block. You can estimate, model, or measure these parameters with network-analysis applications and tools.


Note

The actual network-analysis process is beyond the scope of this book. Traffic estimation, modeling, and measurement are complex procedures, each requiring its own dedicated analysis tool.


Generally, a switch block is too large if the following conditions are observed:

- The routers (multilayer switches) at the distribution layer become traffic bottlenecks. This congestion could be because of the volume of inter-VLAN traffic, intensive CPU processing, or switching times required by policy or security functions (access lists, queuing, and so on).

- Broadcast or multicast traffic slows the switches in the switch block. Broadcast and multicast traffic must be replicated and forwarded out many ports simultaneously. This process requires some overhead in the multilayer switch, which can become too great if significant traffic volumes are present.

Switch Block Redundancy

In any network design, the potential always exists for some component to fail. For example, if an electrical circuit breaker is tripped or shuts off, a switch might lose power. A better design is to use a switch that has two independent power supplies. Each power supply could be connected to two power sources so that one source is always likely to be available to power the switch. In a similar manner, a single switch might have an internal problem that causes it to fail. A single link might go down because a media module fails, a fiber-optic cable gets cut, and so on. To design a more resilient network, you can implement most of the components in redundant pairs.


A switch block consists of two distribution switches that aggregate one or more access layer switches. Each access layer switch should have a pair of uplinks—one connecting to each distribution switch. The physical cabling is easy to draw, but the logical connectivity is not always obvious. For example, Figure 1-13 shows a switch block that has a single VLAN A that spans multiple access switches. You might find this where there are several separate physical switch chassis in an access layer room, or where two nearby communications rooms share a common VLAN. Notice from the shading how the single VLAN spans across every switch (both access and distribution) and across every link connecting the switches. This is necessary for the VLAN to be present on both access switches and to have redundant uplinks for high availability.


Figure 1-13 A Redundant Switch Block Design

Although this design works, it is not optimal. VLAN A must be carried over every possible link within the block to span both access switches. Both distribution switches must also support VLAN A because they provide the Layer 3 router function for all hosts on the VLAN. The two distribution switches can use one of several redundant gateway protocols to provide an active IP gateway and a standby gateway at all times. These protocols require Layer 2 connectivity between the distribution switches and are discussed in Chapter 18, “Layer 3 High Availability.”
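The specific protocols are covered in Chapter 18, but as a hedged illustration of the concept, the following sketch shows how one of them, HSRP, might be configured on the two distribution switches for VLAN A. The VLAN number, IP addresses, and priorities are hypothetical.

! Distribution switch 1 (intended active gateway for VLAN A)
interface Vlan100
 ip address 10.1.100.2 255.255.255.0
 standby 1 ip 10.1.100.1
 standby 1 priority 110
 standby 1 preempt
!
! Distribution switch 2 (standby gateway for VLAN A)
interface Vlan100
 ip address 10.1.100.3 255.255.255.0
 standby 1 ip 10.1.100.1
 standby 1 priority 100
 standby 1 preempt

Hosts in VLAN A would use the shared virtual address 10.1.100.1 as their default gateway, so the gateway remains reachable if either distribution switch fails.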

Notice how the shaded links connect to form two triangular loops. Layer 2 networks cannot remain stable or usable if loops are allowed to form, so some mechanism must be used to detect the loops and keep the topology loop free.

In addition, the looped topology makes the entire switch block a single failure domain. If a host in VLAN A misbehaves or generates a tremendous amount of broadcast traffic, all the switches and links in the switch block could be negatively impacted.

A better design works toward keeping the switch block inherently free of Layer 2 loops. As Figure 1-14 shows, a loop-free switch block requires a unique VLAN on each access switch. In other words, VLANs are not permitted to span across multiple access switches. The extent of each VLAN, as shown by the shaded areas, becomes a V shape rather than a closed triangular loop.


Figure 1-14 Best Practice Loop-Free Switch Block Topology


The boundary between Layers 2 and 3 remains the same. All Layer 2 connectivity is contained within the access layer, and the distribution layer has only Layer 3 links. Without any potential Layer 2 loops, the switch block can become much more stable and much less reliant on any mechanisms to detect and prevent loops. Also, because each access switch has two dedicated paths into the distribution layer, both links can be fully utilized with traffic load balanced across them. In turn, each Layer 3 distribution switch can load balance traffic over its redundant links into the core layer using routing protocols.
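One common way to build these Layer 3 links is to configure each physical uplink as a routed port rather than a trunk, so that no VLAN extends across it. A minimal sketch follows; the interface number and the /30 addressing are assumptions made for illustration.

interface TenGigabitEthernet1/0/1
 description Routed uplink toward the core layer (hypothetical)
 no switchport
 ip address 10.0.1.1 255.255.255.252
 no shutdown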

It is also possible to push the Layer 3 boundary from the distribution layer down into the access layer, as long as the access switches can support routing functions. Figure 1-15 illustrates this design. Because Layer 3 links are used throughout the switch block, network stability is offered through the fast convergence of routing protocols and updates. Routing can also load balance packets across the redundant uplinks, making full use of every available link between the network layers.


Figure 1-15 A Completely Routed Switch Block

You should become familiar with a few best practices that can help with a redundant hierarchical network design:

- Design each layer with pairs of switches.

- Connect each switch to the next higher layer with two links for redundancy.

- Connect each pair of distribution switches with a link, but do not connect the access layer switches to each other (unless the access switches support some other means to function as one logical stack or chassis).

- Do not extend VLANs beyond distribution switches. The distribution layer should always be the boundary of VLANs, subnets, and broadcasts. Although Layer 2 switches can extend VLANs to other switches and other layers of the hierarchy, this activity is discouraged. VLAN traffic should not traverse the network core.

Network Core

A core layer is required to connect two or more switch blocks in a campus network. Because all traffic passing to and from all switch blocks must cross the core, the core layer must be as efficient and resilient as possible. The core is the campus network’s basic foundation and carries much more traffic than any other switch block.

Recall that both the distribution and core layers provide Layer 3 functionality. Preferably, the links between distribution and core layer switches should be Layer 3 routed interfaces. You can also use Layer 2 links that carry a small VLAN bounded by the two switches. In the latter case, a Layer 3 switch virtual interface (SVI) is used to provide routing within each small VLAN.
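If the Layer 2 option is chosen, a small transit VLAN can be confined to the single link between the two switches and routed through an SVI. A hedged sketch of that approach is shown here; the VLAN number, interface, and addressing are made up for illustration.

! Transit VLAN that exists only on this one link
vlan 999
 name Transit-to-Core
!
interface TenGigabitEthernet1/0/2
 description Link to core switch, carried in the transit VLAN
 switchport mode access
 switchport access vlan 999
!
! SVI that provides the routing hop for the transit VLAN
interface Vlan999
 ip address 10.0.99.1 255.255.255.252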

The links between layers should be designed to carry the amount of traffic load handled by the distribution switches, at a minimum. The links between core switches should be of sufficient size to carry the aggregate amount of traffic coming into one of the core switches. Consider the average link utilization, but allow for future growth. An Ethernet core allows simple and scalable upgrades in orders of magnitude; consider the progression from Gigabit Ethernet to 10-Gigabit Ethernet (10GE), and so on.


A core should consist of two multilayer switches that connect two or more switch blocks in a redundant fashion. A redundant core is sometimes called a dual core because it is usually built from two identical switches. Figure 1-16 illustrates the core. Notice that this core appears as an independent module and is not merged into any other block or layer.


Figure 1-16 A Redundant Core Layer

Redundant links connect each switch block’s distribution layer portion to each of the dual core switches. The two core switches connect by a common link.

With a redundant core, each distribution switch has two equal-cost paths into the core, allowing the available bandwidth of both paths to be used simultaneously. Both paths remain active because the distribution and core layers use Layer 3 devices that can manage equal-cost paths in routing tables. The routing protocol in use determines the availability or loss of a neighboring Layer 3 device. If one switch fails, the routing protocol reroutes traffic using an alternative path through the remaining redundant switch.
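Any standard interior gateway protocol can provide this equal-cost behavior. As a rough sketch, OSPF could be enabled on a distribution switch as shown below, assuming a hypothetical 10.0.0.0 addressing plan for the uplinks and access subnets; with two equal-cost routes toward the core, the switch installs both and shares traffic across them by default.

router ospf 1
 router-id 10.1.0.1
 ! Advertise the routed uplinks toward the core (hypothetical range)
 network 10.0.0.0 0.0.255.255 area 0
 ! Advertise the access layer subnets (hypothetical range)
 network 10.1.0.0 0.0.255.255 area 0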

If the campus network continues to grow to the point that it spans two large buildings or two large locations, the core layer can be replicated, as shown in Figure 1-17. Notice how the two-node redundant core has been expanded to include four core switches. This is known as a multinode core. Each of the four core switches is connected to the other core switches to form a fully meshed core layer.


Figure 1-17 Using a Multi-Node Core in a Very Large Campus Network

Even though the multinode core is fully meshed, the campus network is still divided across the two pairs of core switches. Each switch block has redundant connections to only one core pair—not to all of the core switches.

Collapsed Core

Should all networks have a distinct redundant core layer? Perhaps not in smaller campus networks, where the cost of a separate core layer is not justified and its scalability is not needed. A collapsed core block is one in which the hierarchy’s core layer is collapsed into the distribution layer. Here, both distribution and core functions are provided within the same switch devices.

Figure 1-18 shows the basic collapsed core design. Although the distribution and core layer functions are performed in the same device, keeping these functions distinct and properly designed is important. Note also that the collapsed core is not an independent building block but is integrated into the distribution layer of the individual standalone switch blocks.


Figure 1-18 A Collapsed Core Network Design

In the collapsed core design, each access layer switch has a redundant link to each distribution layer switch. All Layer 3 subnets present in the access layer terminate at the distribution switches’ Layer 3 ports, as in the basic switch block design. The distribution switches connect to each other with redundant links, completing a path to use during a failure.

Core Size in a Campus Network

The core layer is made up of redundant switches and is bounded and isolated by Layer 3 devices. Routing protocols determine paths and maintain the core’s operation. As with any network, you must pay some attention to the overall design of the routers and routing protocols in the network. Because routing protocols propagate updates throughout the network, a topology change in one place must be communicated everywhere. The network’s size (the number of routers) then affects routing protocol performance as updates are exchanged and network convergence takes place.

Although the network shown previously in Figure 1-16 might look small, with only two switch blocks of two Layer 3 switches (route processors within the distribution layer switches) each, large campus networks can have many switch blocks connected into the core. If you think of each multilayer switch as a router, you will recall that each route processor must communicate with and keep information about each of its directly connected peers. Most routing protocols have practical limits on the number of peer routers that can be directly connected on a point-to-point or multiaccess link. In a network with a large number of switch blocks, the number of connected routers can grow quite large. Should you be concerned about a core switch peering with too many distribution switches?

No, because the actual number of directly connected peers is quite small, regardless of the campus network size. Access layer VLANs terminate at the distribution layer switches (unless the access layer is configured for Layer 3 operation). The only peering routers at that boundary are pairs of distribution switches, each providing routing redundancy for each of the access layer VLAN subnets. At the distribution and core boundary, each distribution switch connects to only two core switches over Layer 3 switch interfaces. Therefore, only pairs of router peers are formed.

When multilayer switches are used in the distribution and core layers, the routing protocols running in both layers regard each pair of redundant links between layers as equal-cost paths. Traffic is routed across both links in a load-sharing fashion, utilizing the bandwidth of both.

One final core layer design point is to scale the core switches to match the incoming load. At a minimum, each core switch must handle switching each of its incoming distribution links at 100 percent capacity.

Cisco Products in a Hierarchical Network Design

Before delving into the design practices needed to build a hierarchical campus network, you should have some idea of the actual devices that you can place at each layer. Cisco has switching products tailored for layer functionality and for the size of the campus network.

For the purposes of this discussion, a large campus can be considered to span across many buildings. A medium campus might make use of one or several buildings, and a small campus might have only a single building.

Choose your Cisco products based on the functionality that is expected at each layer of a small, medium, or large campus. Do not get lost in the details of the tables. Rather, try to understand which switch fits into which layer for a given network size.

In the access layer, high port density, Power over Ethernet (PoE), and low cost are usually desirable. The Catalyst 2960-X, 3650, and 3850 switches provide 48 ports each. Switches of the same model can be connected to form a single logical switch when a greater number of ports is needed. The Catalyst 4500E is a single-switch chassis that can be populated with a variety of line cards. It also offers a choice of supervisor modules, which provide redundancy and even the ability to perform software upgrades with no impact on the production network. Table 1-3 describes some Cisco switch platforms that are commonly used in the access layer.


Table 1-3 Common Access Layer Switch Platforms

The distribution and core layers are very similar in function and switching features. Generally, these layers require high Layer 3 switching throughput and a high density of high-bandwidth optical media. Cisco offers the Catalyst 3750-X, 4500-X, 4500E, and 6800, as summarized in Table 1-4.


Table 1-4 Common Distribution and Core Layer Switch Platforms

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the Key Topic icon in the outer margin of the page. Table 1-5 lists a reference of these key topics and the page numbers on which each is found.


Table 1-5 Key Topics for Chapter 1

Complete Tables and Lists from Memory

There are no memory tables in this chapter.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

hierarchical network design

access layer

distribution layer

core layer

switch block

collapsed core

dual core