Ethernet Technology - CompTIA Network+ N10-006 Cert Guide (2015)


Chapter 4. Ethernet Technology

After completion of this chapter, you will be able to answer the following questions:

- What are the characteristics of Ethernet networks, in terms of media access, collision domains, broadcast domains, and distance/speed limitations of various Ethernet standards?

- What functions are performed by Ethernet switch features, such as VLANs, trunks, Spanning Tree Protocol, link aggregation, Power over Ethernet, port monitoring, user authentication, and first-hop redundancy?

Odds are, when you are working with local-area networks (LANs), you are working with Ethernet as the Layer 1 technology. Back in the mid 1990s, there was tremendous competition between technologies such as Ethernet, Token Ring, and Fiber Distributed Data Interface (FDDI). Today, however, we can see that Ethernet is the clear winner of those Layer 1 wars.

Of course, over the years, Ethernet has evolved. Several Ethernet standards exist in modern LANs, with a variety of distance and speed limitations. This chapter begins by reviewing the fundamentals of Ethernet networks, including a collection of Ethernet speeds and feeds.

Chapter 3, “Network Components,” introduced you to Ethernet switches. Because these switches are such an integral part of LANs, this chapter delves into many of the features offered by some Ethernet switches.

Foundation Topics

Principles of Ethernet

The genesis of Ethernet was 1972, when this technology was developed by Xerox Corporation. The original intent was to create a technology to allow computers to connect with laser printers. A quick survey of most any corporate network reveals that Ethernet rose well beyond its humble beginnings, with Ethernet being used to interconnect such devices as computers, printers, wireless access points, servers, switches, routers, video-game systems, and more. This section discusses early Ethernet implementations and limitations and references up-to-date Ethernet throughput and distance specifications.

Ethernet Origins

In the network-industry literature, you might come upon the term IEEE 802.3 (where IEEE refers to the Institute of Electrical and Electronics Engineers standards body). In general, you can use the term IEEE 802.3 interchangeably with the term Ethernet. However, be aware that these technologies have some subtle distinctions. For example, a classic Ethernet (Ethernet II) frame header contains a Type field, whereas the original 802.3 frame header uses that same field to indicate the frame’s length.

A popular implementation of Ethernet, in the early days, was called 10BASE5. The 10 in 10BASE5 referred to network throughput, specifically 10 Mbps (that is, 10 million [mega] bits per second). The BASE in 10BASE5 referred to baseband, as opposed to broadband, as discussed in Chapter 2, “The OSI Reference Model.” Finally, the 5 in 10BASE5 indicated the distance limitation of 500 meters. The cable used in 10BASE5 networks, as shown in Figure 4-1, had a larger diameter than most types of media. In fact, this network type became known as thicknet.


Figure 4-1 10BASE5 Cable
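The naming convention just described (speed, signaling type, then distance or media) can be captured in a short, purely illustrative helper; the function name and return format here are hypothetical, not part of any standard:

```python
def decode_ethernet_name(name):
    """Hypothetical helper: split a legacy Ethernet standard name
    (for example, "10BASE5") into the three parts described above."""
    speed_mbps, suffix = name.upper().split("BASE")
    suffix = suffix.lstrip("-")
    if suffix.isdigit():
        # Rough segment length in hundreds of meters (10BASE2 is the
        # exception: nominally 200 m, actually 185 m).
        media_or_distance = "~{} m segment length".format(int(suffix) * 100)
    else:
        media_or_distance = {"T": "twisted-pair cabling"}.get(suffix, "media type " + suffix)
    return {
        "speed": "{} Mbps".format(int(speed_mbps)),
        "signaling": "baseband",  # BASE = baseband, not broadband
        "media_or_distance": media_or_distance,
    }

print(decode_ethernet_name("10BASE5"))
# {'speed': '10 Mbps', 'signaling': 'baseband', 'media_or_distance': '~500 m segment length'}
```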

Another early Ethernet implementation was 10BASE2. From the previous analysis of 10BASE5, you might conclude that 10BASE2 was a 10-Mbps baseband technology with a distance limitation of 200 meters. That is almost correct. However, 10BASE2’s actual distance limitation was 185 meters. The cabling used in 10BASE2 networks was significantly thinner and therefore less expensive than 10BASE5 cabling. As a result, 10BASE2 cabling, as shown in Figure 4-2, was known as thinnet or cheapernet.


Figure 4-2 Coaxial Cable Used for 10BASE2

10BASE5 and 10BASE2 networks are rarely, if ever, seen today. Beyond their 10-Mbps bandwidth limitation, the coaxial cabling these legacy technologies used quickly faded in popularity with the advent of unshielded twisted-pair (UTP) cabling, as discussed in Chapter 2. The 10-Mbps version of Ethernet that relied on UTP cabling, an example of which is provided in Figure 4-3, is known as 10BASE-T, where the T in 10BASE-T refers to twisted-pair cabling.


Figure 4-3 UTP Cable Used for 10BASE-T

Carrier Sense Multiple Access Collision Detect

Ethernet was based on the philosophy that all networked devices should be eligible, at any time, to transmit on a network. This school of thought is in direct opposition to technologies such as Token Ring, which boasted a deterministic media access approach. Specifically, Token Ring networks passed a token around a network in a circular fashion, from one networked device to the next. Only when a networked device was in possession of that token was it eligible to transmit on the network.

Recall from Chapter 1, “Computer Network Fundamentals,” the concept of a bus topology. An example of a bus topology is a long cable (such as thicknet or thinnet) running the length of a building, with various networked devices tapping into that cable to gain access to the network.

Consider Figure 4-4, which depicts an Ethernet network using a shared bus topology.


Figure 4-4 Ethernet Network Using a Shared Bus Topology

In this topology, all devices are directly connected to the network and are free to transmit at any time, if they have reason to believe no other transmission currently exists on the wire. Ethernet permits only a single frame to be on a network segment at any one time. So, before a device in this network transmits, it listens to the wire to see if there is currently any traffic being transmitted. If no traffic is detected, the networked device transmits its data. However, what if two devices simultaneously had data to transmit? If they both listen to the wire at the same time, they could simultaneously, and erroneously, conclude that it is safe to send their data. However, when both devices simultaneously send their data, a collision occurs. A collision, as depicted in Figure 4-5, results in data corruption.


Figure 4-5 Collision on an Ethernet Segment

Fortunately, Ethernet was designed with a mechanism to detect collisions and allow the devices whose transmissions collided to retransmit their data at different times. Specifically, after the devices notice that a collision occurred, they independently set a random backoff timer. Each device waits for this random amount of time to elapse before again attempting to transmit. Here’s the logic: Because each device almost certainly picked a different amount of time to back off, their transmissions should not collide the next time these devices transmit, as illustrated in Figure 4-6.


Figure 4-6 Recovering from a Collision with Random Backoff Timers
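The backoff behavior just described can be sketched as a toy simulation. The function name and the fixed pick range below are illustrative simplifications; real CSMA/CD uses binary exponential backoff, doubling the pick range after each successive collision:

```python
import random

def backoff_slots(device_count=2, max_slots=3):
    """Toy model of post-collision random backoff: every device picks
    a random slot count to wait; if any two picks match, those devices
    would collide again on retransmission, so all devices pick again."""
    while True:
        picks = [random.randint(0, max_slots) for _ in range(device_count)]
        if len(set(picks)) == device_count:  # every device waits a different time
            return picks

slots = backoff_slots()
# Multiply by the classic 10-Mbps slot time (51.2 microseconds) for real waits:
print([s * 51.2 for s in slots])
```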

The procedure used by Ethernet to determine whether it is safe to transmit, detect collisions, and retransmit if necessary is called carrier sense multiple access collision detect (CSMA/CD).

Let’s break CSMA/CD down into its constituent components:


- Carrier sense: A device attached to an Ethernet network can listen to the wire, prior to transmitting, to make sure that a frame is not currently being transmitted on the network segment.

- Multiple access: Unlike a deterministic method of network access (for example, the method used by Token Ring), all Ethernet devices simultaneously have access to an Ethernet segment.

- Collision detect: If a collision occurs (perhaps because two devices were simultaneously listening to the network and simultaneously concluded that it was safe to transmit), Ethernet devices can detect that collision and set random backoff timers. After each device’s random timer expires, each device again attempts to transmit its data.

Even with Ethernet’s CSMA/CD feature, Ethernet segments still suffer from scalability limitations. Specifically, the likelihood of collisions increases as the number of devices on a shared Ethernet segment increases. A related approach, carrier sense multiple access collision avoidance (CSMA/CA), attempts to avoid collisions rather than merely detect them and is common in wireless networks.

Regarding wired Ethernet, devices on a shared Ethernet segment are said to belong to the same collision domain. One example of a shared Ethernet segment is a 10BASE5 or 10BASE2 network with multiple devices attaching to the same cable. On that cable, only one device can transmit at any one time. Therefore, all devices attached to the thicknet or thinnet cable are in the same collision domain.

Similarly, devices connected to an Ethernet hub are, as shown in Figure 4-7, in the same collision domain. As described in Chapter 3, a hub is considered to be a Layer 1 device and does not make forwarding decisions. Instead, a hub takes bits in on one port and sends them out all the other hub ports except the one on which the bits were received.


Figure 4-7 Shared Ethernet Hub: One Collision Domain

Ethernet switches, an example of which is presented in Figure 4-8, dramatically increase the scalability of Ethernet networks by creating multiple collision domains. In fact, every port on an Ethernet switch is in its own collision domain.


Figure 4-8 Ethernet Switch: One Collision Domain per Port

A less-obvious but powerful benefit also accompanies Ethernet switches. Because a switch port connects to a single device, there is no chance of having a collision. With no chance of collision, collision detection is no longer needed, and network devices can operate in full-duplex mode rather than half-duplex mode. In full-duplex mode, a device can send and receive at the same time.

When multiple devices are connected to the same shared Ethernet segment such as a Layer 1 hub, CSMA/CD must be enabled. As a result, the network must operate in half-duplex mode, which means that only a single networked device can transmit or receive at any one time. In half-duplex mode, a networked device cannot simultaneously transmit and receive, which is an inefficient use of a network’s bandwidth.

Distance and Speed Limitations

To understand the bandwidth available on networks, we need to define a few terms. You should already know that a bit refers to one of two possible values. These values are represented using binary math, which uses only the numbers 0 and 1. On a cable such as twisted-pair cable, a bit could be represented by the absence or presence of voltage. Fiber-optic cables, however, might represent a bit with the absence or presence of light.

The bandwidth of a network is measured in terms of how many bits the network can transmit during a 1-second period of time. For example, if a network has the capacity to transmit 10,000,000 (that is, 10 million) bits in a 1-second period of time, the bandwidth capacity is said to be 10 Mbps, where Mbps refers to megabits (that is, millions of bits) per second. Table 4-1 defines common bandwidths supported on various types of Ethernet networks.


Table 4-1 Ethernet Bandwidth Capacities
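The arithmetic behind these bandwidth figures is straightforward. As a quick illustrative check (the function name is hypothetical), the time a chunk of data occupies a link is its size in bits divided by the link’s bits-per-second capacity:

```python
def transmit_time_seconds(size_bytes, bandwidth_mbps):
    """Time a chunk of data occupies the wire: bits sent divided by
    bits-per-second capacity (framing overhead ignored)."""
    bits = size_bytes * 8
    return bits / (bandwidth_mbps * 1_000_000)

# A 1500-byte frame is 12,000 bits: 1.2 ms at 10 Mbps.
print(transmit_time_seconds(1500, 10))   # 0.0012
```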

The type of cabling used in your Ethernet network influences the bandwidth capacity and the distance limitation of your network. For example, fiber-optic cabling often has a higher bandwidth capacity and a longer distance limitation than twisted-pair cabling.

Recall from Chapter 3 the contrast of single-mode fiber (SMF) to multimode fiber (MMF). Because of the issue of multimode delay distortion, SMF usually has a longer distance limitation than MMF.

When you want to uplink one Ethernet switch to another, you might need different connectors (for example, MMF, SMF, or UTP) for different installations. Fortunately, some Ethernet switches have one or more empty slots in which you can insert a gigabit interface converter (GBIC). GBICs are interfaces that have a bandwidth capacity of 1 Gbps and are available with MMF, SMF, or UTP connectors. This allows you to have flexibility in the uplink technology you use in an Ethernet switch.


Note

A variant of a regular GBIC, which is smaller, is the small form-factor pluggable (SFP), which is sometimes called a mini-GBIC.


Although not comprehensive, Table 4-2 offers a listing of multiple Ethernet standards, along with their media type, bandwidth capacity, and distance limitation.


Table 4-2 Types of Ethernet


Note

Two often-confused terms are 100BASE-T and 100BASE-TX. 100BASE-T itself is not a specific standard. Rather, 100BASE-T is a category of standards and includes 100BASE-T2 (which uses two pairs of wires in a Cat 3 cable), 100BASE-T4 (which uses four pairs of wires in a Cat 3 cable), and 100BASE-TX. 100BASE-T2 and 100BASE-T4 were early implementations of 100-Mbps Ethernet and are no longer used. Therefore, you can generally use the 100BASE-T and 100BASE-TX terms interchangeably.

Similarly, the term 1000BASE-X is not a specific standard. Rather, 1000BASE-X refers to all Ethernet technologies that transmit data at a rate of 1 Gbps over fiber-optic cabling. Additional and creative ways of using Ethernet technology include IEEE 1901-2013, which can be used for Ethernet over HDMI cables and Ethernet over existing power lines to avoid having to run separate cabling just for networking.


Ethernet Switch Features

Chapter 3 delved into the operation of Layer 2 Ethernet switches (which we generically refer to as switches). You read an explanation of how a switch learns which Media Access Control (MAC) addresses reside off of which ports and an explanation of how a switch makes forwarding decisions based on destination MAC addresses.

Beyond basic frame forwarding, however, many Layer 2 Ethernet switches offer a variety of other features to enhance such things as network performance, redundancy, security, management, flexibility, and scalability. Although the specific features offered by a switch vary, this section introduces you to some of the more common features found on switches.

Virtual LANs

In a basic switch configuration, all ports on a switch belong to the same broadcast domain, as explained in Chapter 3. In such an arrangement, a broadcast received on one port gets forwarded out all other ports.

Also, from a Layer 3 perspective, all devices connected in a broadcast domain have the same network address. Chapter 5, “IPv4 and IPv6 Addresses,” gets into the binary math behind the scenes of how networked devices can be assigned an IP address (that is, a logical Layer 3 address). A portion of that address is the address of the network to which that device is attached. The remaining portion of that address is the address of the device itself. Devices that have the same network address are said to belong to the same network, or subnet.

Imagine that you decide to place PCs from different departments within a company into their own subnet. One reason you might want to do this is for security purposes. For example, by having the Accounting department in a separate subnet (that is, a separate broadcast domain) than the Sales department, devices in one subnet will not see the broadcasts being sent on the other subnet.

A design challenge might be that PCs belonging to these departments are scattered across multiple floors in a building. Consider Figure 4-9 as an example. The Accounting and Sales departments each have a PC on both floors of a building. Because the wiring for each floor runs back to a wiring closet on that floor, to support these two subnets using a switch’s default configuration, you would have to install two switches on each floor. For traffic to travel from one subnet to another subnet, that traffic has to be routed, meaning that a device such as a multilayer switch or a router forwards traffic based on a packet’s destination network addresses. So, in this example, the Accounting switches are interconnected and then connect to a router, and the Sales switches are connected similarly.


Figure 4-9 Example: All Ports on a Switch Belonging to the Same Subnet

The design presented lacks efficiency, in that you have to install at least one switch per subnet. A more efficient design would be to logically separate a switch’s ports into different broadcast domains. Then, in the example, an Accounting department PC and a Sales department PC could connect to the same switch, even though those PCs belong to different subnets. Fortunately, virtual LANs (VLANs) make this possible.

With VLANs, as illustrated in Figure 4-10, a switch can have its ports logically divided into more than one broadcast domain (that is, more than one subnet or VLAN). Then, devices that need to connect to those VLANs can connect to the same physical switch, yet logically be separate from one another.


Figure 4-10 Example: Ports on a Switch Belonging to Different VLANs

One challenge with VLAN configuration in large environments is the need to configure identical VLAN information on all switches. Manually performing this configuration is time consuming and error prone. However, switches from Cisco Systems support VLAN Trunking Protocol (VTP), which allows a VLAN created on one switch to be propagated to other switches in a group of switches (that is, a VTP domain). VTP information is carried over a trunk connection, which is discussed next.

Switch Configuration for an Access Port

Configurations used on a switch port may vary, based on the manufacturer of the switch. Example 4-1 shows a sample configuration on an access port (no trunking) on a Cisco Catalyst switch. Lines with a leading ! document the configuration line(s) that follow.

Example 4-1 Switch Access Port Configuration



! Move into configuration mode for interface gig 0/21
SW1(config)# interface GigabitEthernet0/21

! Add a text description of what the port is used for
SW1(config-if)# description Access port in Sales VLAN 21

! Define the port as an access port, and not a trunk port
SW1(config-if)# switchport mode access

! Assign the port to VLAN 21
SW1(config-if)# switchport access vlan 21

! Enable port security
SW1(config-if)# switchport port-security

! Control the number of MAC addresses the switch may learn
! from device(s) connected to this switch port
SW1(config-if)# switchport port-security maximum 5

! Restrict any frames from MAC addresses above the 5 allowed
SW1(config-if)# switchport port-security violation restrict

! Set the speed to 1,000 Mbps (1 Gigabit per second)
SW1(config-if)# speed 1000

! Set the duplex to full
SW1(config-if)# duplex full

! Configure the port to begin forwarding without waiting the
! standard amount of time normally set by Spanning Tree Protocol
SW1(config-if)# spanning-tree portfast


Trunks

One challenge with carving a switch up into multiple VLANs is that several switch ports (that is, one port per VLAN) could be consumed to connect a switch back to a router. A more efficient approach is to allow traffic for multiple VLANs to travel over a single connection, as shown in Figure 4-11. This type of connection is called a trunk.


Figure 4-11 Example: Trunking Between Switches

The most popular trunking standard today is IEEE 802.1Q, which is often referred to as dot1q. One of the VLANs traveling over an 802.1Q trunk is called a native VLAN. Frames belonging to the native VLAN are sent unaltered over the trunk (no tags). However, to distinguish other VLANs from one another, the remaining VLANs are tagged.

Specifically, a nonnative VLAN has four tag bytes (where a byte is a collection of 8 bits) added to the Ethernet frame. Figure 4-12 shows the format of an IEEE 802.1Q header with these 4 bytes.


Figure 4-12 IEEE 802.1Q Header

One of these bytes contains a VLAN field. That field indicates to which VLAN a frame belongs. The devices (for example, a switch, a multilayer switch, or a router) at each end of a trunk interrogate that field to determine to which VLAN an incoming frame is associated. As you can see by comparing Figures 4-9, 4-10, and 4-11, VLAN and trunking features allow switch ports to be used far more efficiently than merely relying on a default switch configuration.
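To make the tag format concrete, the following minimal sketch builds and parses the 4 tag bytes. The TPID value 0x8100 and the layout of the tag control information (a 3-bit priority, a 1-bit drop eligible indicator, and the 12-bit VLAN ID) come from the 802.1Q standard; the function names themselves are illustrative:

```python
import struct

def build_dot1q_tag(vlan_id, priority=0, dei=0):
    """Build the 4 tag bytes IEEE 802.1Q inserts into a frame:
    a 2-byte TPID of 0x8100, then a 2-byte TCI holding a 3-bit
    priority, a 1-bit DEI, and the 12-bit VLAN ID."""
    assert 0 <= vlan_id < 4096, "VLAN ID is a 12-bit field"
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)   # network (big-endian) byte order

def vlan_from_tag(tag_bytes):
    """Extract the VLAN ID a receiving switch would read from the tag."""
    tpid, tci = struct.unpack("!HH", tag_bytes)
    assert tpid == 0x8100, "not an 802.1Q-tagged frame"
    return tci & 0x0FFF

tag = build_dot1q_tag(vlan_id=21)
print(len(tag), vlan_from_tag(tag))   # 4 21
```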

Switch Configuration for a Trunk Port

Example 4-2 shows a sample configuration on a trunk port on a Cisco Catalyst switch. Lines with a leading ! document the configuration line(s) that follow.

Example 4-2 Sample Trunk Port Configuration



! Go to interface config mode for interface Gig 0/22
SW1(config)# interface GigabitEthernet0/22

! Add a text description
SW1(config-if)# description Trunk to another switch

! Specify that this is a trunk port
SW1(config-if)# switchport mode trunk

! Specify the trunking protocol to use
SW1(config-if)# switchport trunk encapsulation dot1q

! Specify the native VLAN to use for un-tagged frames
SW1(config-if)# switchport trunk native vlan 5

! Specify which VLANs are allowed to go on the trunk
SW1(config-if)# switchport trunk allowed vlan 1-50


Spanning Tree Protocol

Administrators of corporate telephone networks often boast about their telephone system (that is, a private branch exchange [PBX] system) having the five nines of availability. If a system has five nines of availability, it is up and functioning 99.999 percent of the time, which translates to only about 5 minutes of downtime per year.

Traditionally, corporate data networks struggled to compete with corporate voice networks, in terms of availability. Today, however, many networks that traditionally carried only data now carry voice, video, and data. Therefore, availability becomes an even more important design consideration.

To improve network availability at Layer 2, many networks have redundant links between switches. However, unlike Layer 3 packets, Layer 2 frames lack a Time-to-Live (TTL) field. As a result, a Layer 2 frame can circulate endlessly through a looped Layer 2 topology. Fortunately, IEEE 802.1D Spanning Tree Protocol (STP) allows a network to physically have Layer 2 loops while strategically blocking data from flowing over one or more switch ports to prevent the looping of traffic.

In the absence of STP, if we have parallel paths, two significant symptoms include corruption of a switch’s MAC address table and broadcast storms, where frames loop over and over throughout our switched network. An enhancement to the original STP protocol, IEEE 802.1w, is called Rapid Spanning Tree Protocol (RSTP) because it does a quicker job of adjusting to network conditions, such as the addition or removal of Layer 2 links in the network.

Shortest Path Bridging (SPB, IEEE 802.1aq) is a protocol that scales better than STP in larger environments (hundreds of interconnected switches).

Corruption of a Switch’s MAC Address Table

As described in Chapter 3, a switch’s MAC address table can dynamically learn what MAC addresses are available off of its ports. However, in the event of an STP failure, a switch’s MAC address table can become corrupted. To illustrate, consider Figure 4-13.


Figure 4-13 MAC Address Table Corruption

PC1 is transmitting traffic to PC2. When the frame sent from PC1 is transmitted on segment A, the frame is seen on the Gig 0/1 ports of switches SW1 and SW2, causing both switches to add an entry to their MAC address tables associating a MAC address of AAAA.AAAA.AAAA with port Gig 0/1. Because STP is not functioning, both switches then forward the frame out on segment B. As a result, PC2 receives two copies of the frame. Also, switch SW1 sees the frame forwarded out of switch SW2’s Gig 0/2 port. Because the frame has a source MAC address of AAAA.AAAA.AAAA, switch SW1 incorrectly updates its MAC address table, indicating that a MAC address of AAAA.AAAA.AAAA resides off of port Gig 0/2. Similarly, switch SW2 sees the frame forwarded on to segment B by switch SW1 on its Gig 0/2 port. Therefore, switch SW2 also incorrectly updates its MAC address table.
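The corruption sequence reduces to a few lines of illustrative logic. The learn helper below is hypothetical (real switches also age entries out), but it shows how a reflected frame overwrites a correct entry:

```python
def learn(mac_table, src_mac, port):
    """A switch (re)learns a frame's source MAC against the port the
    frame arrived on, overwriting any earlier entry."""
    mac_table[src_mac] = port

table_sw1 = {}
# PC1's frame arrives from segment A on Gig 0/1: correct entry.
learn(table_sw1, "AAAA.AAAA.AAAA", "Gig 0/1")
# Without STP, SW2 reflects the same frame onto segment B, so SW1
# sees source AAAA.AAAA.AAAA again on Gig 0/2 and overwrites the
# entry -- the table is now wrong about where PC1 lives.
learn(table_sw1, "AAAA.AAAA.AAAA", "Gig 0/2")
print(table_sw1)   # {'AAAA.AAAA.AAAA': 'Gig 0/2'}
```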

Broadcast Storms

As previously mentioned, when a switch receives a broadcast frame (that is, a frame destined for a MAC address of FFFF.FFFF.FFFF), the switch floods the frame out of all switch ports, other than the port on which the frame was received. Because a Layer 2 frame does not have a TTL field, a broadcast frame endlessly circulates through the Layer 2 topology, consuming resources on both switches and attached devices (for example, user PCs).

Figure 4-14 and the following list illustrate how a broadcast storm can form in a Layer 2 topology when STP is not functioning correctly.


Figure 4-14 Broadcast Storm


1. PC1 sends a broadcast frame on to segment A, and the frame enters each switch on port Gig 0/1.

2. Both switches flood a copy of the broadcast frame out of their Gig 0/2 ports (that is, on to segment B), causing PC2 to receive two copies of the broadcast frame.

3. Both switches receive a copy of the broadcast frame on their Gig 0/2 ports (that is, from segment B) and flood the frame out of their Gig 0/1 ports (that is, on to segment A), causing PC1 to receive two copies of the broadcast frame.

This behavior continues as the broadcast frame copies continue to loop through the network. The performance of PC1 and PC2 is impacted because they also continue to receive copies of the broadcast frame.

STP Operation

STP prevents Layer 2 loops from occurring in a network because such an occurrence might result in a broadcast storm or corruption of a switch’s MAC address table. Switches in an STP topology are classified as one of the following:


- Root bridge: A switch elected to act as a reference point for a spanning tree. The switch with the lowest bridge ID (BID) is elected as the root bridge. The BID is made up of a priority value and a MAC address.

- Nonroot bridge: All other switches in the STP topology are considered to be nonroot bridges.

Figure 4-15 illustrates the root bridge election in a network. Notice that because the bridge priorities are both 32768, the switch with the lowest MAC address (that is, SW1) is elected as the root bridge.


Figure 4-15 Root Bridge Election
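Because the BID compares the priority first and uses the MAC address only as a tie-breaker, the election can be modeled with an ordinary tuple comparison. The switch names and MAC addresses below are illustrative:

```python
def elect_root_bridge(switches):
    """Lowest bridge ID wins: priority is compared first, and the
    MAC address breaks ties -- exactly what comparing
    (priority, mac) tuples does."""
    return min(switches, key=lambda s: (s["priority"], s["mac"]))

switches = [
    {"name": "SW1", "priority": 32768, "mac": "1111.1111.1111"},
    {"name": "SW2", "priority": 32768, "mac": "2222.2222.2222"},
]
# Equal priorities, so the lowest MAC (SW1) becomes the root bridge.
print(elect_root_bridge(switches)["name"])   # SW1
```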

Ports that interconnect switches in an STP topology are categorized as one of the port types described in Table 4-3.


Table 4-3 STP Port Types

Figure 4-16 illustrates these port types. Notice the root port for switch SW2 is selected based on the lowest port ID because the costs of both links are equal. Specifically, each link has a cost of 19, because both links are Fast Ethernet links.


Figure 4-16 Identifying STP Port Roles

Figure 4-17 shows a similar topology to Figure 4-16. In Figure 4-17, however, the top link is running at a speed of 10 Mbps, whereas the bottom link is running at a speed of 100 Mbps. Because switch SW2 seeks to get back to the root bridge (that is, switch SW1) with the least cost, port Gig 0/2 on switch SW2 is selected as the root port.


Figure 4-17 STP with Different Port Costs

Specifically, port Gig 0/1 has a cost of 100, and Gig 0/2 has a cost of 19. Table 4-4 shows the port costs for various link speeds.


Table 4-4 STP Port Cost
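Root-port selection using these short-STP costs can be sketched as follows. The tie-break on lowest port ID matches the Figure 4-16 behavior; the helper itself is illustrative:

```python
# Classic (short) STP port costs, keyed by link speed in Mbps:
STP_COST = {10: 100, 100: 19, 1000: 4, 10000: 2}

def root_port(uplinks):
    """Pick the root port: lowest cost back to the root bridge wins;
    on a cost tie, the lowest port ID wins."""
    return min(uplinks, key=lambda l: (STP_COST[l["speed_mbps"]], l["port"]))

# The Figure 4-17 scenario: a 10-Mbps uplink versus a 100-Mbps uplink.
links = [
    {"port": "Gig 0/1", "speed_mbps": 10},    # cost 100
    {"port": "Gig 0/2", "speed_mbps": 100},   # cost 19
]
print(root_port(links)["port"])   # Gig 0/2
```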


Note

A new standard for STP port costs, called long STP, will be increasingly adopted over the coming years because of link speeds exceeding 10 Gbps. Long STP values range from 2,000,000 for 10-Mbps Ethernet to as little as 2 for 10 Tbps (that is, 10 trillion [tera] bits per second).


Nondesignated ports do not forward traffic during normal operation but do receive bridge protocol data units (BPDUs). If a link in the topology goes down, the nondesignated port detects the link failure and determines whether it needs to transition to the forwarding state.

If a nondesignated port needs to transition to the forwarding state, it does not do so immediately. Rather, it transitions through the following states:


- Blocking: The port remains in the blocking state for 20 seconds by default. During this time, the nondesignated port evaluates BPDUs in an attempt to determine its role in the spanning tree.

- Listening: The port moves from the blocking state to the listening state and remains in this state for 15 seconds by default. During this time, the port sources BPDUs, which inform adjacent switches of the port’s intent to forward data.

- Learning: The port moves from the listening state to the learning state and remains in this state for 15 seconds by default. During this time, the port begins to add entries to its MAC address table.

- Forwarding: The port moves from the learning state to the forwarding state and begins to forward frames.
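Adding up the default delays above shows why a newly connected port can take a noticeable time to pass traffic, and why Example 4-1 enabled spanning-tree portfast on a port connected to an end host:

```python
# Default 802.1D delays, in seconds, for the transitional states above:
STATE_DELAYS = [("blocking", 20), ("listening", 15), ("learning", 15)]

def time_to_forwarding():
    """Worst-case wait before a port begins forwarding frames."""
    return sum(delay for _state, delay in STATE_DELAYS)

print(time_to_forwarding())   # 50
```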

Link Aggregation

If all ports on a switch are operating at the same speed (for example, 1 Gbps), the most likely ports to experience congestion are ports connecting to another switch or router. For example, imagine a wiring closet switch connected (via Fast Ethernet ports) to multiple PCs. That wiring closet switch has an uplink to the main switch for a building. Because this uplink port aggregates multiple 100-Mbps connections and the uplink port is also operating at 100 Mbps, it can quickly become congested if multiple PCs are transmitting traffic that needs to be sent over that uplink, as shown in Figure 4-18.


Figure 4-18 Uplink Congestion

To help alleviate congested links between switches, you can (on some switch models) logically combine multiple physical connections into a single logical connection, over which traffic can be sent. This feature, as illustrated in Figure 4-19, is called link aggregation.


Figure 4-19 Link Aggregation

Although vendor-proprietary solutions for link aggregation have existed for some time, a couple of common issues with some solutions included the following:

- Each link in the logical bundle was a potential single point of failure.

- Each end of the logical bundle had to be manually configured.

In 2000, the IEEE ratified the 802.3ad standard for link aggregation, which supports Link Aggregation Control Protocol (LACP). Unlike some of the older vendor-proprietary solutions, LACP supports automatic configuration and prevents an individual link from becoming a single point of failure. Specifically, with LACP, if a link fails, that link’s traffic is forwarded over a different link in the bundle.

A group of interfaces bundled together this way is often referred to as a link aggregation group (LAG). Cisco Systems’ implementation is called EtherChannel, and the terms LACP and EtherChannel are both commonly used. An EtherChannel group can be configured to act as a Layer 2 access port, supporting a single VLAN, or as a Layer 2 802.1Q trunk, supporting multiple VLANs over the LAG. A LAG can also be configured as a Layer 3 routed interface if the switch supports that feature; in that case, an IP address is applied to the logical interface that represents the LAG. Another term related to LACP and LAGs is port bonding, which refers to the same concept of grouping multiple ports and using them as a single logical interface.
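One design point worth noting: a LAG typically hashes each frame’s addresses onto one member link, so all frames of a given conversation stay on one physical link (preserving frame order) while different conversations spread across the bundle. A minimal sketch follows; CRC-32 is an illustrative hash choice, and real switches use vendor-specific hash inputs such as MAC addresses, IP addresses, or port numbers:

```python
import zlib

def pick_member_link(src_mac, dst_mac, link_count):
    """Map a conversation (here identified by its MAC address pair)
    onto one member link of a LAG. The same pair always hashes to the
    same link, keeping that conversation's frames in order."""
    return zlib.crc32((src_mac + dst_mac).encode()) % link_count

link = pick_member_link("AAAA.AAAA.AAAA", "BBBB.BBBB.BBBB", link_count=2)
print(link)   # always the same member link (0 or 1) for this MAC pair
```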

LACP Configuration

Example 4-3 shows a sample configuration of LACP on a Cisco switch. Comment lines are preceded by an exclamation mark (!).

Example 4-3 LACP Configuration



! Move to interface that will be part of the LACP group
SW1(config)# interface GigabitEthernet0/16

! Assign this interface to the LACP group 1
SW1(config-if)# channel-group 1 mode active

! Move to the other interface(s) that will be part of
! the same group
SW1(config-if)# interface GigabitEthernet0/17
SW1(config-if)# channel-group 1 mode active

! Configure the group of interfaces as a logical group
! Configuration here will also apply the individual
! interfaces that are part of the group
SW1(config-if)# interface Port-channel 1

! Apply the configuration desired for the group
! LACP groups can be access or trunk ports depending
! on how the configuration of the logical port-channel interface
! In this example the LAG will be acting as a trunk
SW1(config-if)# switchport mode trunk
SW1(config-if)# switchport trunk encapsulation dot1q
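
After the channel comes up, its state can be checked from privileged EXEC mode. The commands below exist on most Cisco Catalyst platforms, although the exact output varies by model and IOS version, so treat this as a sketch rather than a definitive reference.

```
! Verify the status of the logical EtherChannel interface
! and which physical ports are bundled into it
SW1# show etherchannel summary

! Verify LACP negotiation with the neighboring switch
SW1# show lacp neighbor
```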


Power over Ethernet

Some switches not only transmit data over a connected UTP cable, but also use that cable to provide power to an attached device. For example, imagine that you want to install a wireless access point (AP) mounted to a ceiling. Although no electrical outlet is available near the AP’s location, you can, as an example, run a Cat 5 UTP plenum cable above the drop ceiling and connect it to the AP. Some APs allow the switch at the other end of the UTP cable to provide power over the same wires that carry data. Examples of other devices that might benefit by receiving power from an Ethernet switch include security cameras and IP phones.

The switch feature that provides power to attached devices is called Power over Ethernet (PoE), and it is defined by the IEEE 802.3af standard. As shown in Figure 4-20, the PoE feature of a switch checks for 25k ohms (25,000 ohms) of resistance in the attached device. To check the resistance, the switch applies as much as 10V of direct current (DC) across specific pairs of wires (that is, pins 1 and 2 combine to form one side of the circuit, and pins 3 and 6 combine to form the other side of the circuit) connecting back to the attached device and checks how much current flows over those wires. For example, if the switch applied 10V DC across those wires and noticed 0.4 mA (milliamps) of current, the switch would conclude that the attached device has 25k ohms of resistance across those wires (based on the formula E = IR, where E represents voltage, I represents current, and R represents resistance; solving for resistance gives R = E / I = 10V / 0.4 mA = 25k ohms). The switch could then apply power across those wires.

Image

Image

Figure 4-20 PoE

The next thing the switch must determine is how much power the attached device needs. The switch makes this determination by applying 15.5–20.5V DC (making sure that the current never exceeds 100 mA) to the attached device, for a brief period of time (less than one-tenth of a second). The amount of current flowing to the attached device tells the switch the power class of the attached device. The switch then knows how much power should be made available on the port connecting to the device requiring power, and it begins supplying an appropriate amount of voltage (in the range 44–57V) to the attached device.

The IEEE 802.3af standard can supply a maximum of 15.4W (watts) of power per port. However, a more recent standard, IEEE 802.3at (often called PoE+), offers as much as 25.5W of power to a powered device, enabling PoE to support a wider range of devices.

Port Monitoring

For troubleshooting purposes, you might want to analyze packets flowing over the network. To capture packets (that is, store a copy of packets on a local hard drive) for analysis, you could attach a network sniffer to a hub. Because a hub sends bits received on one port out all other ports, the attached network sniffer sees all traffic entering the hub.

Although several standalone network sniffers are on the market, a low-cost way to perform packet capture and analysis is to use software such as Wireshark (http://www.wireshark.org), as shown in Figure 4-21.

Image

Figure 4-21 Example: Wireshark Packet-Capture Software

A challenge arises, however, if you connect your network sniffer (for example, a laptop running the Wireshark software) to a switch port rather than a hub port. Because a switch, by design, forwards frames out ports containing the frames’ destination addresses, a network sniffer attached to one port would not see traffic destined for a device connected to a different port.

Consider Figure 4-22. Traffic enters a switch on port 1 and, based on the destination MAC addresses, exits via port 2. However, a network sniffer is connected to port 3 and is unable to see (and therefore capture) the traffic flowing between ports 1 and 2.

Image

Figure 4-22 Example: Network Sniffer Unable to Capture Traffic

Fortunately, some switches support a port mirroring feature, which makes a copy of traffic seen on one port and sends that duplicated traffic out another port (to which a network sniffer could be attached). As shown in Figure 4-23, the switch is configured to mirror traffic on port 2 to port 3. This allows a network sniffer to capture the packets that need to be analyzed. Depending on the switch, locally captured traffic could be forwarded to a remote destination for centralized analysis of that traffic.

Image

Image

Figure 4-23 Example: Network Sniffer with Port Mirroring Configured on the Switch

Port Mirroring Configuration

Example 4-4 shows a sample configuration from a Cisco Catalyst switch that captures all the frames coming in on port gig 0/1, and forwards them to port gig 0/3.

Example 4-4 Port Mirroring Configuration

Click here to view code image


SW1(config)# monitor session 1 source interface Gi0/1
SW1(config)# monitor session 1 destination interface Gi0/3
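
The text above notes that captured traffic can also be forwarded to a remote destination for centralized analysis. On Cisco Catalyst switches, this is done with Remote SPAN (RSPAN), which carries the mirrored frames across a special-purpose VLAN. The following sketch assumes VLAN 99 has been chosen as the RSPAN VLAN and is carried on the trunks between the two switches; command availability varies by platform.

```
! On both switches, define the RSPAN VLAN
SW1(config)# vlan 99
SW1(config-vlan)# remote-span

! On the source switch, mirror port Gi0/1 into the RSPAN VLAN
SW1(config)# monitor session 2 source interface Gi0/1
SW1(config)# monitor session 2 destination remote vlan 99

! On the remote switch, send the RSPAN VLAN's traffic out
! the port where the network sniffer is attached
SW2(config)# monitor session 2 source remote vlan 99
SW2(config)# monitor session 2 destination interface Gi0/24
```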


User Authentication

For security purposes, some switches require users to authenticate themselves (that is, provide credentials, such as a username and password, to prove who they are) before gaining access to the rest of the network. A standards-based method of enforcing user authentication is IEEE 802.1X.

With 802.1X enabled, a switch requires a client to authenticate before communicating on the network. After the authentication occurs, a key is generated that is shared between the client and the device to which it attaches (for example, a wireless LAN controller or a Layer 2 switch). The key then encrypts traffic coming from and being sent to the client.

In Figure 4-24, you see the three primary components of an 802.1X network, which are described in the following list.

Image

Figure 4-24 802.1X User Authentication

Image

Image Supplicant: The device that wants to gain access to the network.

Image Authenticator: The authenticator forwards the supplicant’s authentication request on to an authentication server. After the authentication server authenticates the supplicant, the authenticator receives a key that is used to communicate securely during a session with the supplicant.

Image Authentication server: The authentication server (for example, a Remote Authentication Dial In User Service [RADIUS] server) checks a supplicant’s credentials. If the credentials are acceptable, the authentication server notifies the authenticator that the supplicant is allowed to communicate on the network. The authentication server also gives the authenticator a key that can be used to securely transmit data during the authenticator’s session with the supplicant.
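
To tie these roles together, the following is a hedged sketch of what the authenticator-side configuration might look like on a Cisco Catalyst switch. The RADIUS server address (192.0.2.10), shared key, and interface number are hypothetical, and command syntax varies somewhat across IOS versions.

```
! Enable AAA and point 802.1X authentication at a RADIUS server
SW1(config)# aaa new-model
SW1(config)# radius-server host 192.0.2.10 key MySharedSecret
SW1(config)# aaa authentication dot1x default group radius

! Globally enable 802.1X on the switch
SW1(config)# dot1x system-auth-control

! Require 802.1X authentication on an access port
SW1(config)# interface GigabitEthernet0/5
SW1(config-if)# switchport mode access
SW1(config-if)# authentication port-control auto
SW1(config-if)# dot1x pae authenticator
```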

An even more sophisticated approach to admission control is the Network Admission Control (NAC) feature offered by some authentication servers. Beyond just checking credentials, NAC can check characteristics of the device seeking admission to the network. The client’s operating system (OS) and version of antivirus software are examples of these characteristics.

Management Access and Authentication

To manage a switch, you could use Secure Shell (SSH) or connect directly to the switch’s console port. An unmanaged switch is one that offers neither an IP address nor a console port to connect to for management purposes. When possible, it is desirable to keep management traffic for a managed switch on a network separate from user traffic; this approach is referred to as out-of-band (OOB) management. To use remote SSH access, SSH must be enabled on the switch, and the switch must have an IP address and default gateway configured so that it can reply to SSH requests from an administrator who is not on the switch’s local network. Example 4-5 shows a sample configuration for IP and management access on a Cisco Catalyst switch.

Example 4-5 Management Access

Click here to view code image


! Move to the logical Layer 3 interface that will
! receive the management IP address for the switch
SW1(config)# interface vlan 1

! Configure an IP address that is available for the
! switch to use
SW1(config-if)# ip address 172.16.55.123 255.255.255.0
SW1(config-if)# exit

! Configure a domain name, required for creating the
! keys used for SSH cryptography
SW1(config)# ip domain-name pearson.com

! Create the public/private key pair SSH can use
SW1(config)# crypto key generate rsa modulus 1024

! Specify the version of SSH to allow
SW1(config)# ip ssh version 2

! Create a user account on the local switch
SW1(config)# username admin privilege 15 secret pears0nR0cks!

! Move to the logical VTY lines used for SSH access
SW1(config)# line vty 0 15

! Allow only SSH on the 16 logical VTY lines (0 - 15)
SW1(config-line)# transport input ssh

! Require using an account from the local switch to log in
SW1(config-line)# login local
SW1(config-line)# exit

! Set the default gateway the switch can use when communicating
! over an SSH session with an administrator who is on a different
! network than the switch's interface VLAN 1
SW1(config)# ip default-gateway 172.16.55.1

! Move to the console port of the switch
SW1(config)# line console 0

! Require authentication using the local switch before allowing
! access to the switch through the console port
SW1(config-line)# login local


First-Hop Redundancy

Many devices, such as PCs, are configured with a default gateway. The default gateway parameter identifies the IP address of a next-hop router. As a result, if that router were to become unavailable, devices that relied on the default gateway’s IP address would be unable to send traffic off their local subnet.

Fortunately, a variety of technologies are available for providing first-hop redundancy. One such technology is Hot Standby Router Protocol (HSRP), which is a Cisco proprietary protocol. HSRP can run on routers or multilayer switches.

HSRP uses virtual IP and MAC addresses. One router, known as the active router, services requests destined for the virtual IP and MAC addresses. Another router, known as the standby router, can service such requests in the event the active router becomes unavailable. Figure 4-25 illustrates a sample HSRP topology.

Image

Figure 4-25 Sample HSRP Topology

Notice that router R1 is acting as the active router, and router R2 is acting as the standby router. When workstation A sends traffic destined for a remote network, it sends traffic to its default gateway of 172.16.1.3, which is the IP address being serviced by HSRP. Because router R1 is currently the active router, R1 does the work of forwarding the traffic off the local network. However, router R2 notices if router R1 becomes unavailable, because hello messages are no longer received from router R1. At that point, router R2 transitions to an active router role. With default timer settings, the time required to fail over to router R2 is approximately 10 seconds. However, timers can be adjusted such that the failover time is as little as 1 second.
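
As an illustration of the topology in Figure 4-25, the following sketch configures router R1 as the HSRP active router for virtual IP address 172.16.1.3. The interface name, the routers’ physical addresses (.1 and .2), the group number, and the priority value are all assumptions, and the `standby 1 timers 1 3` command shows how the hello and hold timers could be tightened to reduce failover time, as described above.

```
! On R1 (intended active router): a priority higher than the
! default of 100 wins the active-router election
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip address 172.16.1.1 255.255.255.0
R1(config-if)# standby 1 ip 172.16.1.3
R1(config-if)# standby 1 priority 110
R1(config-if)# standby 1 preempt
R1(config-if)# standby 1 timers 1 3

! On R2 (standby router): default priority of 100
R2(config)# interface GigabitEthernet0/0
R2(config-if)# ip address 172.16.1.2 255.255.255.0
R2(config-if)# standby 1 ip 172.16.1.3
R2(config-if)# standby 1 timers 1 3
```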


Note

Cisco has another proprietary first-hop redundancy protocol named Gateway Load Balancing Protocol (GLBP). Although GLBP and HSRP are Cisco-proprietary solutions, Virtual Router Redundancy Protocol (VRRP), an IETF standard, and Common Address Redundancy Protocol (CARP), an open alternative, are nonproprietary options for first-hop redundancy.


Other Switch Features

Although switch features, such as those previously described, vary widely by manufacturer, some switches offer a variety of security features. For example, MAC filtering might be supported, which allows traffic to be permitted or denied based on a device’s MAC address. Other types of traffic filtering might also be supported, based on criteria such as IP address information (for multilayer switches).
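
On Cisco Catalyst switches, one common way to implement MAC filtering is the port security feature. The sketch below, which uses a hypothetical interface and MAC address, limits a port to a single allowed device and shuts the port down if any other address appears; details vary by platform.

```
SW1(config)# interface GigabitEthernet0/7
SW1(config-if)# switchport mode access

! Enable port security and permit one specific MAC address
SW1(config-if)# switchport port-security
SW1(config-if)# switchport port-security maximum 1
SW1(config-if)# switchport port-security mac-address 0000.1111.2222
SW1(config-if)# switchport port-security violation shutdown
```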

For monitoring and troubleshooting purposes, interface diagnostics might be accessible. This diagnostic information might contain information including various error conditions (for example, late collisions or cyclic redundancy check [CRC] errors, which might indicate a duplex mismatch).

Some switches also support quality of service (QoS) settings. QoS can forward traffic based on the traffic’s priority markings. Also, some switches have the ability to perform marking and remarking of traffic priority values.
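
As a minimal example of honoring existing priority markings, the following sketch enables QoS on an older Catalyst switch and trusts the Differentiated Services Code Point (DSCP) values of frames arriving on a port. The `mls qos` syntax is platform specific (newer platforms use MQC-style policies), so treat this purely as an illustration.

```
! Enable QoS globally (required on some Catalyst platforms)
SW1(config)# mls qos

! Trust the DSCP markings received on this port rather
! than re-marking the traffic
SW1(config)# interface GigabitEthernet0/9
SW1(config-if)# mls qos trust dscp
```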


Note

QoS technologies are covered in more detail in Chapter 9, “Network Optimization.”


Real-World Case Study

Acme Inc. has made some decisions regarding the setup of its LAN. For connections from the client machines to the switches in the wiring closets (IDF), it will use unshielded twisted-pair Category 5 cabling with the switch ports configured as access ports and set to 100 Mbps to match the Fast Ethernet capabilities of the client computers that will be connecting to the switch.

Multiple VLANs will be used. The computers that are being used by Sales will be connected to ports on a switch that are configured as access ports for the specific VLAN for Sales. Computers used by Human Resources will connect to switch ports that are configured as access ports for the Human Resources VLAN. There will be separate IP subnetworks associated with each of the VLANs.

To provide a fault-tolerant default gateway for the clients in each of the VLANs, a first-hop redundancy protocol will be used, such as HSRP, GLBP, or VRRP.

The fiber connections that will go vertically through the building and connect the switches in the IDFs to the MDF in the basement will be running at 1 Gbps each, and multiple fiber cables will be used. Link Aggregation Control Protocol will be used for these vertical connections to make the multiple fiber links work together as part of one logical EtherChannel interface. For the LACP connections between the IDFs and MDF to support multiple VLANs, the LAG will be configured as a trunk using 802.1Q tagging. Routing between the VLANs will be done by multilayer switches that are located near the MDF.

Spanning tree will be enabled on the switches so that in the event of parallel paths between switches, a Layer 2 loop can be prevented.

To support IP-based telephones in the offices, the switches will also provide Power over Ethernet, which can supply power to the IP phones over the Ethernet cables that run between the switch in the IDF and the IP telephones.

If protocol analysis needs to be done, the switches that will be purchased need to support port mirroring so that frames from one port can be captured and forwarded to an alternate port for analysis.

To authenticate devices that are connecting to the switch ports, 802.1X can be used. To authenticate administrators who are connecting to switches for management, authentication can be forced at the logical vty lines. SSH will be enabled and enforced because it is a secure management protocol. The switches will be given their own IP address, in addition to a default gateway to use so that they can be remotely managed. Local user accounts will be created on the switches so that local authentication can be implemented as each administrator connects either to the console or via SSH.

Summary

The main topics covered in this chapter are the following:

Image The origins of Ethernet, which included a discussion of Ethernet’s CSMA/CD features.

Image A variety of Ethernet standards, which were contrasted in terms of media type, network bandwidth, and distance limitation.

Image Various features that might be available on modern Ethernet switches. These features include VLANs, trunking, STP, link aggregation, PoE, port monitoring, user authentication, and first-hop redundancy.

Exam Preparation Tasks

Review All the Key Topics

Review the most important topics from inside the chapter, noted with the Key Topic icon in the outer margin of the page. Table 4-5 lists these key topics and the page numbers where each is found.

Image

Table 4-5 Key Topics for Chapter 4

Complete Tables and Lists from Memory

Print a copy of Appendix D, “Memory Tables” (found on the DVD), or at least the section for this chapter, and complete the tables and lists from memory. Appendix E, “Memory Table Answer Key,” also on the DVD, includes the completed tables and lists so you can check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the Glossary:

Ethernet

collision

carrier sense multiple access collision detect (CSMA/CD)

full-duplex

half-duplex

virtual LAN (VLAN)

trunk

Spanning Tree Protocol (STP)

root port

designated port

nondesignated port

link aggregation

Power over Ethernet (PoE)

supplicant

authenticator

authentication server

Review Questions

The answers to these review questions are in Appendix A, “Answers to Review Questions.”

1. Identify the distance limitation of a 10BASE5 Ethernet network.

a. 100 m

b. 185 m

c. 500 m

d. 2 km

2. If two devices simultaneously transmit data on an Ethernet network and a collision occurs, what does each station do in an attempt to resend the data and avoid another collision?

a. Each device compares the other device’s priority value (determined by IP address) with its own, and the device with the highest priority value transmits first.

b. Each device waits for a clear to send (CTS) signal from the switch.

c. Each device randomly picks a priority value, and the device with the highest value transmits first.

d. Each device sets a random back off timer, and the device will attempt retransmission after the timer expires.

3. What kind of media is used by 100GBASE-SR10 Ethernet?

a. UTP

b. MMF

c. STP

d. SMF

4. Which of the following statements are true regarding VLANs? (Choose two.)

a. A VLAN has a single broadcast domain.

b. For traffic to pass between two VLANs, that traffic must be routed.

c. Because of a switch’s MAC address table, traffic does not need to be routed to pass between two VLANs.

d. A VLAN has a single collision domain.

5. What name is given to a VLAN on an IEEE 802.1Q trunk whose frames are not tagged?

a. Native VLAN

b. Default VLAN

c. Management VLAN

d. VLAN 0

6. In a topology running STP, every network segment has a single ______________ port, which is the port on that segment that is closest to the root bridge, in terms of cost.

a. Root

b. Designated

c. Nondesignated

d. Nonroot

7. What is the IEEE standard for link aggregation?

a. 802.1Q

b. 802.3ad

c. 802.1d

d. 802.3af

8. What is the maximum amount of power a switch is allowed to provide per port according to the IEEE 802.3af standard?

a. 7.7 W

b. 15.4 W

c. 26.4 W

d. 32.4 W

9. What switch feature allows you to connect a network sniffer to a switch port and tells the switch to send a copy of frames seen on one port out the port to which your network sniffer is connected?

a. Port interception

b. Port duplexing

c. Port mirroring

d. Port redirect

10. Which IEEE 802.1X component checks the credentials of a device wanting to gain access to the network?

a. Supplicant

b. Authentication server

c. Access point

d. Authenticator