Chapter 5. Virtual Chassis Fabric

A growing trend in the networking industry is to move away from traditional architectures such as Layer 2–only access switches or putting both Layer 2 and Layer 3 in the distribution layer. The next logical step for ubiquitous Layer 2 and Layer 3 access with ease of management is to create an Ethernet fabric.

Virtual Chassis Fabric (VCF) is a plug-and-play Ethernet fabric that offers a single point of management and a rich feature set. Another good way to visualize VCF is as a 3-stage Clos topology with the look and feel of a single logical switch.

Overview

It’s a common myth that a high-performance, feature-rich network is difficult to manage. This usually stems from the fact that there are many factors that the administrator must worry about:

§ Performance

§ Scale

§ Latency

§ High availability

§ Routing protocols

§ Equal cost multipathing

§ Layer 2 and Layer 3 access

§ Lossless Ethernet

§ Software upgrades

§ Management

Such a laundry list of tasks and responsibilities is often daunting to a small or medium-sized company with a minimal IT staff, and indeed, without assistance, would be difficult to administer. VCF was created specifically to solve this problem. It provides an architecture by which a single person can manage the entire network as if it were a single device, without sacrificing the performance, high availability, or other features.

It’s easy to assume from the name that VCF has a lot of roots in the original Virtual Chassis technology; if you made such an assumption, you would be correct. VCF expands on the original Virtual Chassis technology and introduces new topologies, features, and performance.

Architecture

One of the most compelling benefits of VCF is the ability to create 3-stage Clos topologies (see Figure 5-1). VCF is the encapsulation of all of the switches in the 3-stage Clos topology.

Figure 5-1. VCF architecture

There are two high-level roles in the VCF architecture: spine and leaf. These roles are used to create the 3-stage Clos topology and are described here:

Spine

The spine switches are at the heart of the topology and are used to interconnect all of the other leaf switches. Typically, the spine switches are higher-speed devices than leaf switches; this is to maintain low latency and high performance when looking at the entire network end-to-end.

Leaf

The leaf switches are the ingress and egress nodes of the 3-stage Clos fabric. Most of the end points in the data center will connect through the leaf switches. The leaf switches are feature rich, can support servers, storage, and appliances, and can peer with other networking equipment.

The leaf role supports any of the Juniper QFX5100 switches, giving you a choice of port densities and speeds; if you need a large deployment of 1GbE interfaces, you can use an EX4300 device as the leaf switch. VCF also offers investment protection; you can reuse existing switches such as those in the QFX3500 and QFX3600 families.

VCF is a flexible platform that allows you to incrementally change and increase the scale of the network. For example, you can start with two spine and two leaf switches today and then upgrade to four spine and 28 leaf switches tomorrow. Adding switches into the fabric is made very easy thanks to the plug-and-play nature of the architecture. You can add new leaf switches into the topology that are then automatically discovered and brought online.

Traffic engineering

VCF uses the Intermediate System to Intermediate System (IS-IS) routing protocol internally with some modified type length values (TLVs); this gives VCF a full end-to-end view of the topology and of the bandwidth available on every link. As Juniper QFX5100 switches are combined to create a VCF, the links connecting the switches automatically form logical links called Smart Trunks. As traffic flows across the VCF, the flows can be split up at each intersection in the fabric in an equal or unequal manner depending on the bandwidth of the links. For example, if all of the links in the VCF were the same speed and quantity, all next hops would be considered equal. However, during failure conditions some links could fail, leaving some switches with more bandwidth toward a destination than others. Smart Trunks allow for Unequal-Cost Multipathing (UCMP) in the event that some paths have more bandwidth than others. As a result, traffic is never dropped in a failure scenario.
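
To see how traffic from one port to another actually traverses the fabric, you can trace the path hop by hop from the CLI. The following is a minimal sketch using the show virtual-chassis vc-path command (the interface names here are hypothetical); it lists each hop and the VCP next hops used along the way, which is a handy way to see Smart Trunks in action:

{master:0}

root@VCF> show virtual-chassis vc-path source-interface xe-2/0/0 destination-interface xe-3/0/0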

Adaptive Load Balancing: Of Mice and... Elephants?

OK, it might not be Steinbeck-esque, but in the data center, there’s a story about mice and elephants. The idea is that there are long-lived flows in the data center network that consume a lot of bandwidth; these types of flows are referred to as elephant flows. Some examples of elephant flows are backups and copying data. Other types of flows are short-lived and consume little bandwidth; these are referred to as mice flows. Examples of mice flows include DNS and NTP.

The problem is that each flow within a network switch is subject to Equal-Cost Multipath (ECMP), and is pinned to a particular next-hop or uplink interface. If a couple of elephant flows get pinned to the same interface and consume all of the bandwidth, they will begin to overrun other, smaller mice flows on the same egress uplink. Due to the nature of standard flow hashing, mice flows have a tendency to be trampled by elephant flows, which has a negative impact on application performance.

VCF has a very unique solution to the elephant and mice problem. If you take a closer look at TCP flows, you will notice something called flowlets. These are the blocks of data transmitted between TCP acknowledgments from the receiver. Depending on the bandwidth, TCP window size, and other variables, flowlets exist in different sizes and frequencies, as illustrated in Figure 5-2.

Figure 5-2. TCP flowlets

One method to solve the elephant-and-mice problem is to hash the flowlets to different next-hops. For example, in Figure 5-2, if Flow 1 and Flow 3 were elephant flows, each of the flowlets could be hashed to a different uplink, as opposed to the entire flow being stuck on a single uplink. VCF uses the flowlet hashing functionality to solve the elephant-and-mice problem; this feature is called Adaptive Load Balancing (ALB).

ALB is enabled by the use of a hash-bucket table (see Figure 5-3). The hash-bucket table has hundreds of thousands of entries, which is large enough to avoid elephant flowlet collisions. As each flowlet egresses an interface, its hash-bucket entry is updated with a timestamp and the egress link. For each packet processed, the time elapsed since the last packet in that bucket is compared against an inactivity-timer threshold. If the elapsed time exceeds the threshold, the packet marks the start of a new flowlet, and the hash-bucket entry becomes eligible to be assigned a new egress link.

Figure 5-3. Flowlet hashing

Another important factor when selecting a new egress interface for a new flowlet is the link quality. The link quality is defined as a moving average of the link’s load and queue depth. The link with the least utilization and congestion is selected as the egress interface for new flowlets.

ALB is disabled by default and must be turned on in the Junos configuration as shown below:

[edit]

root# set fabric-load-balance flowlet

root# commit

To ensure in-order packet delivery, the inactivity interval should be larger than the largest latency skew among all the paths in the VCF from any node to any other node. The default inactivity timer is 16μs; the timer can be changed from 16μs to 32ms. The basic premise is that you do not want to set the inactivity timer too high; otherwise, it won’t be able to detect the flowlets. The best practice is to leave it set at the default value of 16μs.

To change the inactivity interval, use the following configuration:

[edit]

root# set fabric-load-balance flowlet inactivity-interval <value>

root# commit

The value can be specified in simple terms such as “20us” or “30ms.” There’s no need to convert the units into nanoseconds; just use the simple “us” and “ms” postfixes. To enable ALB to use any available next-hop based upon usage for ECMP, you may enable per-packet mode in ALB with the following configuration:

[edit]

root# set fabric-load-balance per-packet

root# commit

When per-packet mode is enabled, the VCF forwarding algorithm dynamically monitors all paths in the VCF and forwards packets to destination switches using the best available path at that moment. Packets within a flow can be reordered when using per-packet mode, so some performance impact could be seen.

Requirements

There are a few key requirements that must be satisfied to create a VCF. Table 5-1 contains a listing of which switches can be used as a spine and leaf switch.

Switch         Spine  Leaf
QFX5100-24Q    Yes    Yes
QFX5100-96S    Yes    Yes
QFX5100-48S    Yes    Yes
QFX5100-48T    No     Yes
QFX3500        No     Yes
QFX3600        No     Yes
EX4300         No     Yes

Table 5-1. VCF switch requirements

VCF only supports 3-stage Clos topologies; other topologies might work but are not certified or supported by Juniper.

Software

Not all Junos software is compatible with VCF. You must run Junos 13.2X51-D20 or newer on all switches in the VCF.

Spine

A spine switch must be a QFX5100 switch; there are no exceptions. The spine role requires additional control-plane processing, and the updated control plane of the Juniper QFX5100 family makes it a perfect fit for the spine in a VCF. Spine switches must also have a direct connection to each leaf switch in the topology. You cannot use intermediate switches or leave any leaf unconnected. Only spine switches can assume the role of master or backup routing engine.

As of Junos 13.1X53-D10, VCF only supports up to four spine switches.

Leaf

Leaf switches are ideally Juniper QFX5100 switches, but there is also support for the QFX3500, QFX3600, and EX4300. Each leaf must have a direct connection to each spine in the topology. The leaf switches always assume the role of a line card.

As of Junos 14.1X53-D10, VCF supports up to 28 leaf switches.

Virtual Chassis modes

VCF supports two modes: fabric mode and mixed mode. By default, the switch ships in fabric mode. The mode is set on a per-switch basis. All switches in the VCF must be set to the same mode.

WARNING

It’s recommended that you change the mode of a switch before connecting it into a VCF. Connecting a new switch into the fabric and then changing the mode can cause temporary disruptions to the fabric.

Fabric mode

A VCF in fabric mode supports only QFX5100 devices. Fabric mode is the recommended mode because it represents the latest technology, features, and scale. When the VCF is in fabric mode, it supports the full scale and feature set of the Juniper QFX5100 series.

Mixed mode

If you want to introduce native 1GbE connectivity with the EX4300 family or use existing QFX3500 and QFX3600 switches, the VCF must be placed into mixed mode. One of the drawbacks to using mixed mode is that the VCF operates at a “lowest common denominator” level in terms of scale and features. For example, using an EX4300 switch as a leaf in the VCF causes the entire fabric to operate at the reduced scale and feature level of the EX4300 device, as opposed to that of the Juniper QFX5100 device. The same is true for QFX3500 and QFX3600 switches.
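
Before cabling a switch into the fabric, it’s worth confirming which mode it is currently in. As a rough sketch (the exact syntax can vary by Junos release), the show virtual-chassis mode command displays the current mode, and a variation of the request virtual-chassis mode command used later in this chapter places a switch into mixed fabric mode:

root> show virtual-chassis mode

root> request virtual-chassis mode fabric mixed reboot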

Provisioning configurations

Provisioning a VCF involves how the fabric is configured on the command line as well as how new switches are added into the fabric. There are three modes associated with provisioning a VCF. Table 5-2 compares them at a high level.

Attribute                 Auto-provisioned   Preprovisioned                     Nonprovisioned
Configure serial number   Yes                Yes                                Yes
Configure role            Yes                Yes                                Yes
Configure priority        No                 No                                 Yes
Adding new leaf           Plug-and-play      Configure role and serial number   Configure priority and serial number
Virtual Chassis ports     Automatic          Automatic                          Manual

Table 5-2. Comparing Virtual Chassis Fabric provisioning modes

There are benefits and drawbacks to each provisioning mode. Use Table 5-2 and the sections that follow to understand each mode and make the best decision for your network. In general, it’s recommended to use the auto-provisioned mode because it’s a plug-and-play fabric.

Auto-provisioned mode

The easiest and recommended method is the auto-provisioned mode. There is minimal configuration required on the command line, and adding new switches into the fabric is as simple as connecting the cables and powering on the device; it’s truly a plug-and-play architecture.

The only manual configuration required for auto-provisioned mode is to define the spine switches: set each spine’s role to routing-engine and specify its serial number. For example:

[edit]

root# set virtual-chassis member 0 role routing-engine serial-number TB3714070330

[edit]

root# set virtual-chassis member 1 role routing-engine serial-number TB3714070064

In the preceding example, the VCF has two spine switches. We manually configured them as routing engines and set each serial number. Virtual Chassis ports are automatically discovered and configured in auto-provisioned mode.

Preprovisioned mode

The second most common method is the preprovisioned mode. The difference is that you must manually configure every switch in the topology, assigning a role and a serial number to each one. You cannot add new switches into the VCF without configuration. In environments with higher security requirements, a preprovisioned VCF prevents unauthorized switches from being added into the fabric. The configuration syntax is the same as in auto-provisioned mode, except that each leaf must be configured as well. Virtual Chassis ports are automatically discovered and configured in preprovisioned mode.

Nonprovisioned mode

The nonprovisioned mode is the default configuration of each switch from the factory. The role is not required to be defined in this mode; instead, a mastership election process determines the role of each switch. The mastership election process is controlled by setting a priority on a per-switch basis. Virtual Chassis ports are not automatically discovered; you define them manually.
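
Converting an interface into a VCP interface is done from operational mode rather than in the configuration. As a sketch, assuming the uplink you want to convert sits on PIC 0 at port 48 (adjust the slot and port numbers for your hardware), the command looks like the following and must be run on both ends of the link; the delete form returns a VCP interface to service as a regular revenue port:

root> request virtual-chassis vc-port set pic-slot 0 port 48

root> request virtual-chassis vc-port delete pic-slot 0 port 48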

The nonprovisioned mode isn’t recommended in general, and is only reserved for environments that require a specific mastership election process during a failure event. Adding new switches to the fabric requires serial number and priority configuration.

Components

At a high level, there are four components that are used by the switches to build a VCF: routing engine, line card, virtual management Ethernet interface, and Virtual Chassis ports.

Master routing engine

The first role a switch can have in a VCF is a routing engine. Only spine switches can become a routing engine. The leaf switches can only be a line card. The role of routing engine acts as the control plane for the entire VCF. A spine switch operating as a routing engine is responsible for the following:

§ Operating as the control plane for the entire VCF

§ Operating VCF control protocols for auto-discovery, topology management, and internal routing and forwarding protocols

§ Taking ownership of the virtual management Ethernet interface for the VCF

Other spine switches must operate in the backup or line-card role. The spine switch next in line to become the master routing engine operates in the backup role, and all remaining spine switches operate as line cards. In summary, only the first spine switch operates as the master routing engine, the second spine switch operates as the backup routing engine, and the third and fourth spine switches operate as line cards.

Backup routing engine

The backup routing engine is similar to the master routing engine, except that its only job is to become the master routing engine if there’s a failure with the current master routing engine. Part of this responsibility requires that the master and backup routing engines must be perfectly synchronized in terms of kernel and control plane state. The following protocols are used between the master and backup routing engines to keep synchronized:

§ Graceful Routing Engine Switch Over (GRES)

§ Nonstop Routing (NSR)

§ Nonstop Bridging (NSB)

Keeping the backup routing engine synchronized with the master routing engine allows VCF to experience a hitless transition between the master and backup routing engines without traffic loss.

Line card

All other switches in the VCF that aren’t the master or backup routing engine are line cards. By default, all leaf switches are line cards. If there are more than two spines, the remaining spines are also line cards; only the first two spines can be routing engines.

The line card role acts simply as a line card would in a real chassis. There are minimal control plane functions on the line card to process the Virtual Chassis management and provisioning functions; otherwise, the switch simply exists to forward and route traffic as fast as possible.

Virtual Management Ethernet interface

Each switch in a VCF has a management Ethernet port that can be used to manage the switch over IP; the virtual management Ethernet (vme) interface is a logical interface that rides on top of these ports. The vme interface is directly controlled by the routing engine and is out-of-band from the revenue traffic. Figure 5-4 shows an example of the virtual management Ethernet interface; the physical management ports are labeled C0 and C1.

Figure 5-4. Virtual Management Ethernet interface in Virtual Chassis fabric

However, in a VCF, only one of these vme ports can be active at any given time. The switch that currently holds the master routing engine role is responsible for the vme management port. Although a VCF could have up to 32 switches, only a single switch will be used for out-of-band management through the vme port.

Virtual Chassis ports

The Virtual Chassis ports (VCP) are the interfaces that directly connect the switches together. VCP interfaces are standard 10GbE and 40GbE interfaces on the switch and do not require special cables. Simply use the existing QSFP and SFP+ interfaces to interconnect the switches together, as shown in Figure 5-5.

Figure 5-5. VCP interfaces in VCF

After an interface has been configured as a VCP interface, it’s no longer eligible to be used as a standard revenue port. All interswitch traffic will now use VCP interfaces.
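
You can list the interfaces that are currently operating as VCP interfaces, along with their state and neighbors, by using the show virtual-chassis vc-port command; this is a quick sanity check while cabling up the fabric:

{master:0}

root@VCF> show virtual-chassis vc-port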

Implementation

Configuring VCF is straightforward and easy. Let’s take a look at all three provisioning modes to get a better understanding of the configuration differences. We will also take a look at how to add and remove spine and leaf switches. Each provisioning mode is a little different in the configuration and process of expanding the fabric.

Before you configure the switches, there are a few preparatory steps to carry out.

Software Version

Ensure that all switches have the same version of Junos installed. Use Junos 13.2X51-D20 or newer.
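
A quick way to confirm the release that each switch is running before you cable it into the fabric is the show version command:

root> show version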

Disconnect All Cables

Before you begin to configure VCF, be sure to disconnect all cables from the switches. To use the plug-and-play feature of auto-provisioned mode, you want to explicitly control the creation of the spine switches first and then simply add the other switches. For preprovisioned and nonprovisioned modes, you can cable up the switches ahead of time.

Identify Serial Numbers

Identify the serial numbers for each switch. For auto-provisioned mode, you only need the serial numbers for the spine switches. For preprovisioned and nonprovisioned modes, you will need all of the spine and leaf switch serial numbers.

Check for Link Layer Discovery Protocol (LLDP)

LLDP should be turned on by factory default, but always check to ensure that it’s enabled. Use the command set protocols lldp interface all to enable LLDP. VCF uses LLDP to enable the plug-and-play functionality in auto-provisioned mode.
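
You can confirm that LLDP is running and seeing the adjacent switches by using the show lldp and show lldp neighbors commands:

root> show lldp

root> show lldp neighbors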

After you have upgraded the software, disconnected all cables, and identified all of the serial numbers, you can begin to build the VCF.

Configuring the Virtual Management Ethernet interface

Now, let’s configure a management IP address on the first spine switch:

{master:0}[edit]

root# set interfaces vme.0 family inet address 10.92.82.4/24

The next step is to set a root password for the switch. It will prompt you to enter a password, and then again for verification:

{master:0}[edit]

root# set system root-authentication plain-text-password

New password:

Retype new password:

The next step is to enable Secure Shell (SSH) so that we can log in and copy files to and from the switch:

{master:0}[edit]

root# set system services ssh root-login allow

Now that you have set a management IP address and root password, you need to commit the changes to activate them:

{master:0}[edit]

root# commit and-quit

configuration check succeeds

commit complete

Exiting configuration mode

root>

The switch should now be reachable on the C0 management interface on the rear of the switch. Let’s ping it to double-check:

epitaph:~ dhanks$ ping 10.92.82.4

PING 10.92.82.4 (10.92.82.4): 56 data bytes

64 bytes from 10.92.82.4: icmp_seq=0 ttl=55 time=21.695 ms

64 bytes from 10.92.82.4: icmp_seq=1 ttl=55 time=20.222 ms

^C

--- 10.92.82.4 ping statistics ---

2 packets transmitted, 2 packets received, 0.0% packet loss

round-trip min/avg/max/stddev = 20.222/20.959/21.695/0.737 ms

epitaph:~ dhanks$

Everything looks great. The switch can now be managed via IP instead of the serial console.

Auto-provisioned

We’ll go over the auto-provisioned mode in more detail because it’s the most popular and recommended provisioning mode. The auto-provisioned mode only requires that you define the spine switches and their serial numbers. Figure 5-6 presents a simple topology; let’s walk through what its configuration would be.

Figure 5-6. A simple VCF topology

The topology in Figure 5-6 has two spines and four leaf switches. In this example, both spine switches need to be configured. The spine switch serial numbers have been identified and are shown in Table 5-3.

Switch           Serial number
QFX5100-24Q-01   TB3714070330
QFX5100-24Q-02   TB3714070064

Table 5-3. QFX5100-24Q spine serial numbers

Installing the first spine switch

The first step is to ensure that the spine switches are in fabric mode. Use the following command on both QFX5100-24Q switches:

root> request virtual-chassis mode fabric reboot

The switches will reboot to fabric mode.

The next step is to begin configuring VCF on the first spine. Put QFX5100-24Q-01 into auto-provisioned mode. We’ll also allow other switches connected into the fabric to have their software upgraded automatically by using the auto-sw-upgrade knob.

NOTE

Don’t worry about the second spine switch, QFX5100-24Q-02, for the moment. We’ll focus on QFX5100-24Q-01 and move on to the leaf switches. Adding the final spine switch will be the last step when bringing up VCF.

Starting on QFX5100-24Q-01, let’s begin to configure VCF:

[edit]

root# set virtual-chassis auto-provisioned

[edit]

root# set virtual-chassis auto-sw-upgrade

The next step is to configure the role and serial numbers of all of the spine switches (use the data presented earlier in Table 5-3):

[edit]

root# set virtual-chassis member 0 role routing-engine serial-number TB3714070330

[edit]

root# set virtual-chassis member 1 role routing-engine serial-number TB3714070064

Verify the configuration before committing it:

[edit]

root# show virtual-chassis

auto-provisioned;

member 0 {

role routing-engine;

serial-number TB3714070330;

}

member 1 {

role routing-engine;

serial-number TB3714070064;

}

Now, you can commit the configuration:

[edit]

root# commit and-quit

configuration check succeeds

commit complete

Exiting configuration mode

The Juniper QFX5100-24Q is now in VCF mode; we can verify this by using the show virtual-chassis command:

{master:0}

root> show virtual-chassis

Fabric ID: 5ba4.174a.04ca

Fabric Mode: Enabled

Mstr Mixed Route Neighbor List

Member ID Status Serial No Model prio Role Mode Mode ID Interface

0 (FPC 0) Prsnt TB3714070330 qfx5100-24q-2p 128 Master* N F 0 vcp-255/0/0

Installing the first leaf switch

The next step is to begin installing the leaves, which is a very simple process. Log in to the first leaf switch, QFX5100-48S-01, and reset it to a factory-default state:

root> request system zeroize

warning: System will be rebooted and may not boot without configuration

Erase all data, including configuration and log files? [yes,no] (no) yes

warning: ipsec-key-management subsystem not running - not needed by configuration.

root> Terminated

NOTE

If the leaf switches already have Junos 13.2X51-D20 installed and are in a factory default state, you can skip the request system zeroize step. You can simply connect the leaf switch to the spine switch.

After the switch reboots, simply connect a 40G cable from the Juniper QFX5100-24Q-01 to QFX5100-48S-01, as illustrated in Figure 5-7.

Figure 5-7. Connecting QFX5100-24Q-01 to QFX5100-48S-01

When the cable is connected, the master QFX5100-24Q-01 will automatically add the new QFX5100-48S-01 into the VCF.

Install remaining leaf switches

Repeat this step for each QFX5100-48S in the VCF, as shown in Figure 5-8.

Figure 5-8. Connecting the other leaf switches

When all of the Juniper QFX5100-48S leaves have been reset to factory default and connected, the Juniper QFX5100-24Q-01 will bring all of the switches into the VCF. You can verify this by using the show virtual-chassis command:

{master:0}

root> show virtual-chassis

Auto-provisioned Virtual Chassis Fabric

Fabric ID: 742a.6f8b.6de6

Fabric Mode: Enabled

Mstr Mixed Route Neighbor List

Member ID Status Serial No Model prio Role Mode Mode ID Interface

0 (FPC 0) Prsnt TB3714070330 qfx5100-24q-2p 129 Master* N F 4 vcp-255/0/0

3 vcp-255/0/1

2 vcp-255/0/3

5 vcp-255/0/4

2 (FPC 2) Prsnt TA3713480228 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

3 (FPC 3) Prsnt TA3713480106 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

4 (FPC 4) Prsnt TA3713470455 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

5 (FPC 5) Prsnt TA3713480037 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

The newly added switches appear in the preceding output as Member IDs 2, 3, 4, and 5.

Install the last spine

The last step is to add the second spine, QFX5100-24Q-02, into the VCF. Repeat the same steps: reset the switch by using the zeroize command, and after it reboots, connect the remaining cables into a full mesh, as depicted in Figure 5-9.

Figure 5-9. Adding the final spine switch, QFX5100-24Q-02

Wait a couple of minutes and then check the status of the VCF again; you should see the missing member 1 as Prsnt with a role of Backup:

dhanks@> show virtual-chassis

Auto-provisioned Virtual Chassis Fabric

Fabric ID: 742a.6f8b.6de6

Fabric Mode: Enabled

Mstr Mixed Route Neighbor List

Member ID Status Serial No Model prio Role Mode Mode ID Interface

0 (FPC 0) Prsnt TB3714070330 qfx5100-24q-2p 129 Master* N F 4 vcp-255/0/0

3 vcp-255/0/1

2 vcp-255/0/3

5 vcp-255/0/4

1 (FPC 1) Prsnt TB3714070064 qfx5100-24q-2p 129 Backup N F 4 vcp-255/0/0

3 vcp-255/0/1

2 vcp-255/0/3

5 vcp-255/0/4

2 (FPC 2) Prsnt TA3713480228 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

1 vcp-255/0/49

3 (FPC 3) Prsnt TA3713480106 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

1 vcp-255/0/49

4 (FPC 4) Prsnt TA3713470455 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

1 vcp-255/0/49

5 (FPC 5) Prsnt TA3713480037 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

1 vcp-255/0/49

Use the show interfaces terse vme command to verify that the new Virtual Chassis Fabric management interface is up:

{master:0}

root> show interfaces terse vme

Interface Admin Link Proto Local Remote

vme up up

vme.0 up up inet 10.92.82.4/24

You probably recognized (astutely, I should mention) that this is the same vme interface that we originally configured on the Juniper QFX5100-24Q-01 when it was in standalone mode. The vme configuration persisted when the device was placed into the VCF. Because the Juniper QFX5100-24Q-01 is the master routing engine, it also owns the vme interface. We can also check reachability from our laptop:

epitaph:~ dhanks$ ping 10.92.82.4

PING 10.92.82.4 (10.92.82.4): 56 data bytes

64 bytes from 10.92.82.4: icmp_seq=0 ttl=55 time=21.695 ms

64 bytes from 10.92.82.4: icmp_seq=1 ttl=55 time=20.222 ms

^C

--- 10.92.82.4 ping statistics ---

2 packets transmitted, 2 packets received, 0.0% packet loss

round-trip min/avg/max/stddev = 20.222/20.959/21.695/0.737 ms

epitaph:~ dhanks$

It appears that we can reach the VCF by using the built-in management port. We’re now ready for the next step.

Configure high availability

To ensure that the VCF recovers quickly from failures, there are three key features that we need to enable:

§ GRES: Synchronize kernel state between the master and backup routing engines

§ NSR: Synchronize routing protocol state between the master and backup routing engines

§ NSB: Synchronize Layer 2 protocol state between the master and backup routing engines

The first step is to configure GRES:

{master:0}[edit]

dhanks@VCF# set chassis redundancy graceful-switchover

{master:0}[edit]

dhanks@VCF# set system commit synchronize

Next, configure NSR and NSB:

{master:0}[edit]

dhanks@VCF# set routing-options nonstop-routing

{master:0}[edit]

dhanks@VCF# set protocols layer2-control nonstop-bridging

{master:0}[edit]

dhanks@VCF# commit and-quit

configuration check succeeds

commit complete

Exiting configuration mode

Now, verify that the master routing engine is sending data to the backup routing engine through the GRES protocol:

{master:0}

dhanks@VCF> show task replication

Stateful Replication: Enabled

RE mode: Master

The next step is to verify that NSR and NSB are synchronizing state. To do this, you need to log in to the backup routing engine by using the request session command:

{master:0}

dhanks@VCF> request session member 1

--- JUNOS 13.2-X51D20

dhanks@VCF:BK:1% clear

dhanks@VCF:BK:1% cli

warning: This chassis is operating in a non-master role as part of a virtual-

chassis fabric (VCF) system.

warning: Use of interactive commands should be limited to debugging and VC Port

operations.

warning: Full CLI access is provided by the Virtual Chassis Fabric Master (VCF-M)

chassis.

warning: The VCF-M can be identified through the show fabric status command

executed at this console.

warning: Please logout and log into the VCF-M to use CLI.

Now that you’ve logged in to the backup routing engine, verify NSR and NSB:

{backup:1}

dhanks@VCF> show system switchover

fpc1:

--------------------------------------------------------------------------

Graceful switchover: On

Configuration database: Ready

Kernel database: Ready

Peer state: Steady State

{backup:1}

dhanks@VCF> show l2cpd task replication

Stateful Replication: Enabled

RE mode: Backup

Everything looks great. At this point, VCF is configured and ready to use.

Preprovisioned

Configuring the preprovisioned VCF is very similar to the auto-provisioned method. Begin by configuring the following items just as you would for auto-provisioned mode:

§ Ensure that switches are running Junos 13.2X51-D20 or higher

§ Identify all of the serial numbers for both spine and leaf switches

§ Disconnect all cables

§ Configure the vme interface on the first spine switch and check connectivity

The next step is to begin configuring VCF in preprovisioned mode.

Starting on QFX5100-24Q-01, begin to configure VCF:

[edit]

root# set virtual-chassis preprovisioned

[edit]

root# set virtual-chassis auto-sw-upgrade

Configure the role and serial numbers of all of the spine switches (use the data provided in Table 5-3):

[edit]

root# set virtual-chassis member 0 role routing-engine serial-number TB3714070330

[edit]

root# set virtual-chassis member 1 role routing-engine serial-number TB3714070064

Configure the role and serial numbers of all of the leaf switches from Figure 5-6; the leaf switches take the line-card role:

[edit]

root# set virtual-chassis member 2 role line-card serial-number TA3713480228

[edit]

root# set virtual-chassis member 3 role line-card serial-number TA3713480106

[edit]

root# set virtual-chassis member 4 role line-card serial-number TA3713470455

[edit]

root# set virtual-chassis member 5 role line-card serial-number TA3713480037

The next step is to connect the rest of the switches in the topology and turn them on.

Wait a couple of minutes and check the status of the VCF again; you should see the Virtual Chassis up and running:

dhanks@> show virtual-chassis

Pre-provisioned Virtual Chassis Fabric

Fabric ID: 742a.6f8b.6de6

Fabric Mode: Enabled

Mstr Mixed Route Neighbor List

Member ID Status Serial No Model prio Role Mode Mode ID Interface

0 (FPC 0) Prsnt TB3714070330 qfx5100-24q-2p 129 Master* N F 4 vcp-255/0/0

3 vcp-255/0/1

2 vcp-255/0/3

5 vcp-255/0/4

1 (FPC 1) Prsnt TB3714070064 qfx5100-24q-2p 129 Backup N F 4 vcp-255/0/0

3 vcp-255/0/1

2 vcp-255/0/3

5 vcp-255/0/4

2 (FPC 2) Prsnt TA3713480228 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

1 vcp-255/0/49

3 (FPC 3) Prsnt TA3713480106 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

1 vcp-255/0/49

4 (FPC 4) Prsnt TA3713470455 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

1 vcp-255/0/49

5 (FPC 5) Prsnt TA3713480037 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

1 vcp-255/0/49

From this point forward, configure high availability just as you did in auto-provisioned mode. Once the high-availability protocols are working and the routing engines are synchronized, you’re good to go.

Nonprovisioned

Configuring the nonprovisioned VCF is very similar to the preprovisioned method. You begin by configuring the following items just as you would for preprovisioned mode:

§ Ensure that switches are running Junos 13.2X51-D20 or higher

§ Disconnect all cables

§ Identify all serial numbers

§ Configure the vme interface on the first spine switch and check connectivity

The next step is to begin configuring VCF in nonprovisioned mode.

Starting on QFX5100-24Q-01, begin to configure VCF:

[edit]

root# set virtual-chassis auto-sw-upgrade

Now, configure the mastership priority, role, and serial numbers of all of the spine switches (use the data provided in Table 5-3):

[edit]

root# set virtual-chassis member 0 role routing-engine serial-number TB3714070330

[edit]

root# set virtual-chassis member 0 mastership-priority 255

[edit]

root# set virtual-chassis member 1 role routing-engine serial-number TB3714070064

[edit]

root# set virtual-chassis member 1 mastership-priority 254

The next step is to configure the mastership priority, role, and serial numbers of all of the leaf switches from Figure 5-6:

[edit]

root# set virtual-chassis member 2 role line-card serial-number TA3713480228

[edit]

root# set virtual-chassis member 2 mastership-priority 0

[edit]

root# set virtual-chassis member 3 role line-card serial-number TA3713480106

[edit]

root# set virtual-chassis member 3 mastership-priority 0

[edit]

root# set virtual-chassis member 4 role line-card serial-number TA3713470455

[edit]

root# set virtual-chassis member 4 mastership-priority 0

[edit]

root# set virtual-chassis member 5 role line-card serial-number TA3713480037

[edit]

root# set virtual-chassis member 5 mastership-priority 0

The mastership priority ranges from 0 to 255. The higher the mastership priority, the more likely that member is to become the master routing engine; a member with a priority of 0 will never be elected master.

Now, connect the rest of the switches in the topology and turn them on. Because Virtual Chassis ports are not automatically discovered in nonprovisioned mode, also convert the interconnect ports into VCP interfaces on each switch by using the request virtual-chassis vc-port set command shown earlier.

Wait a couple of minutes and check the status of the VCF again; you should see the Virtual Chassis up and running:

dhanks@> show virtual-chassis

Pre-provisioned Virtual Chassis Fabric

Fabric ID: 742a.6f8b.6de6

Fabric Mode: Enabled

Mstr Mixed Route Neighbor List

Member ID Status Serial No Model prio Role Mode Mode ID Interface

0 (FPC 0) Prsnt TB3714070330 qfx5100-24q-2p 129 Master* N F 4 vcp-255/0/0

3 vcp-255/0/1

2 vcp-255/0/3

5 vcp-255/0/4

1 (FPC 1) Prsnt TB3714070064 qfx5100-24q-2p 129 Backup N F 4 vcp-255/0/0

3 vcp-255/0/1

2 vcp-255/0/3

5 vcp-255/0/4

2 (FPC 2) Prsnt TA3713480228 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

1 vcp-255/0/49

3 (FPC 3) Prsnt TA3713480106 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

1 vcp-255/0/49

4 (FPC 4) Prsnt TA3713470455 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

1 vcp-255/0/49

5 (FPC 5) Prsnt TA3713480037 qfx5100-48s-6q 0 Linecard N F 0 vcp-255/0/48

1 vcp-255/0/49

From this point forward, configure high availability just as we did in auto-provisioned mode. Once the high-availability protocols are working and the routing engines are synchronized, you’re good to go.

Using Virtual Chassis Fabric

Now that VCF is configured and ready to go, let’s take a look at some of the most common day-to-day tasks and how they work in VCF.

§ Adding new Virtual Local Area Networks (VLANs) and assigning them to switch ports

§ Assigning routed VLAN interfaces so that the fabric can route between VLANs

§ Adding access control lists

§ Mirroring traffic

§ Setting up Simple Network Management Protocol (SNMP) to enable monitoring of the fabric

Remember that VCF is just a single, logical switch with many physical components. You handle all configuration through a single command-line interface. The VCF also appears as a single, large switch from the perspective of SNMP.

We’ll make the assumption that our VCF has the following topology, as shown in Figure 5-10.

Figure 5-10. A VCF topology

Adding VLANs

The most basic task is adding new VLANs to the network in order to segment servers. The first step is to drop into configuration mode and define the VLAN:

{master:0}[edit]

root@VCF# set vlans Engineering description "Broadcast domain for Engineering group"

{master:0}[edit]

root@VCF# set vlans Engineering vlan-id 100

{master:0}[edit]

root@VCF# set vlans Engineering l3-interface irb.100

The next step is to create a Layer 3 interface for the new Engineering VLAN so that servers have a default gateway. We’ll use the same l3-interface that was referenced in creating the Engineering VLAN:

{master:0}[edit]

root@VCF# set interfaces irb.100 family inet address 192.168.100.1/24

Now that the VLAN and its associated Layer 3 interface are ready to go, the next step is to add servers into the VLAN. Let’s assume that the first QFX5100-48S is in the first rack.

When working with VCF, each switch is identified by its FPC number. An easy way to reveal a switch’s FPC number is the show chassis hardware command, where you can identify each switch by its serial number. It’s important to note that because we used the auto-provisioned mode, the FPC numbers are assigned sequentially, in the order in which new switches are added:

{master:0}

root@VCF> show chassis hardware | match FPC

FPC 0 REV 11 650-049942 TB3714070330 QFX5100-24Q-2P

FPC 1 REV 11 650-049942 TB3714070064 QFX5100-24Q-2P

FPC 2 REV 09 650-049937 TA3713480228 QFX5100-48S-6Q

FPC 3 REV 09 650-049937 TA3713480106 QFX5100-48S-6Q

FPC 4 REV 09 650-049937 TA3713470455 QFX5100-48S-6Q

FPC 5 REV 09 650-049937 TA3713480037 QFX5100-48S-6Q

In our example, the FPC numbers are sequential, starting from 0 and ending in 5, as shown in Figure 5-11.

Figure 5-11. VCF FPC numbers

Now that we know that the first leaf switch is FPC 2, we can begin to assign the new Engineering VLAN to it. The easiest method is to create an alias for all of the 10GbE interfaces on this switch; we’ll call this alias rack-01:

{master:0}[edit]

root@VCF# set interfaces interface-range rack-01 member-range xe-2/0/0 to xe-2/0/47

{master:0}[edit]

root@VCF# set interfaces interface-range rack-01 description "Alias for all 10GE interfaces on FPC2/rack-01"

{master:0}[edit]

root@VCF# set interfaces interface-range rack-01 unit 0 family ethernet-switching vlan members Engineering

{master:0}[edit]

root@VCF# commit and-quit

configuration check succeeds

commit complete

Exiting configuration mode

Now the new interface alias called rack-01 is configured to include all of the 10GbE interfaces from xe-2/0/0 to xe-2/0/47 on the front panel, and the Engineering VLAN is assigned to the entire range via the vlan members statement.

Let’s verify our work by using the show vlans command:

{master:0}

root@VCF> show vlans

Routing instance VLAN name Tag Interfaces

default-switch Engineering 100

xe-2/0/0.0

xe-2/0/1.0

xe-2/0/12.0

xe-2/0/13.0

xe-2/0/2.0

xe-2/0/3.0

xe-2/0/4.0

xe-2/0/5.0

xe-2/0/6.0

xe-2/0/7.0

default-switch default 1

All of the interfaces that have optics in rack-01 are now assigned to the Engineering VLAN.
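
If you need to add only a single port to the VLAN rather than a whole rack, you can assign the VLAN directly on the interface instead of using an interface range; for example, for the hypothetical port xe-2/0/10:

{master:0}[edit]

root@VCF# set interfaces xe-2/0/10 unit 0 family ethernet-switching vlan members Engineering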

Let’s add another VLAN on a different switch for System Test:

{master:0}[edit]

root@VCF# set vlans Systest description "Broadcast domain for System Test"

{master:0}[edit]

root@VCF# set vlans Systest vlan-id 200

{master:0}[edit]

root@VCF# set vlans Systest l3-interface irb.200

{master:0}[edit]

root@VCF# set interfaces irb.200 family inet address 192.168.200.1/24

{master:0}[edit]

root@VCF# set interfaces interface-range rack-02 member-range xe-3/0/0 to xe-3/0/47

{master:0}[edit]

root@VCF# set interfaces interface-range rack-02 description "Alias for all 10GE interfaces on FPC3/rack-02"

{master:0}[edit]

root@VCF# set interfaces interface-range rack-02 unit 0 family ethernet-switching vlan members Systest

{master:0}[edit]

root@VCF# commit and-quit

configuration check succeeds

commit complete

Exiting configuration mode

We can verify that the new System Test VLAN is up and working with a couple of show commands:

{master:0}

root@VCF> show vlans

Routing instance VLAN name Tag Interfaces

default-switch Engineering 100

xe-2/0/0.0

xe-2/0/1.0

xe-2/0/12.0

xe-2/0/13.0

xe-2/0/2.0

xe-2/0/3.0

xe-2/0/4.0

xe-2/0/5.0

xe-2/0/6.0

xe-2/0/7.0

default-switch Systest 200

xe-3/0/0.0

xe-3/0/1.0

xe-3/0/12.0

xe-3/0/13.0

xe-3/0/2.0

xe-3/0/3.0

xe-3/0/4.0

xe-3/0/5.0

xe-3/0/6.0

xe-3/0/7.0

default-switch default 1

{master:0}

root@VCF> show interfaces terse | match irb

irb up up

irb.100 up down inet 192.168.100.1/24

irb.200 up down inet 192.168.200.1/24

The irb interfaces show a link state of down because no member port in either VLAN is up yet; each irb interface transitions to up as soon as at least one port in its VLAN has an active link.

Configuring SNMP

With the VCF configured and running, the next step is to integrate the fabric into a network monitoring program. One of the most common ways to poll information from a switch is using SNMP. Let’s set up a public community string with read-only access:

{master:0}[edit]

root@VCF# set snmp community public authorization read-only

{master:0}[edit]

root@VCF# commit and-quit

configuration check succeeds

commit complete

At this point, you can use your favorite SNMP browser or collection server and begin polling information from the VCF. To confirm that SNMP is working properly, you can run the command-line tool snmpwalk against the vme management IP address with the public community string.

epitaph:~ dhanks$ snmpwalk -v2c -c public 10.92.82.4 | grep SNMPv2-MIB

SNMPv2-MIB::sysDescr.0 = STRING: Juniper Networks, Inc. qfx5100-24q-2p Ethernet

Switch, kernel JUNOS 13.2-X51-D20, Build date: 2014-03-18 12:13:29 UTC Copyright

(c) 1996-2014 Juniper Networks, Inc.

...
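
You can also verify the SNMP agent from the switch itself; Junos can walk its own MIB tree with the show snmp mib walk command. For example, to display the system group:

{master:0}

root@VCF> show snmp mib walk system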

Port Mirroring

There are various ways to mirror traffic within VCF. You define an input and an output for the mirror. The input is a bit more flexible and supports either an interface or an entire VLAN. Let’s set up a port mirror so that all ingress traffic on the Engineering VLAN is mirrored to the xe-3/0/0.0 interface:

{master:0}[edit]

root@VCF# set forwarding-options analyzer COPY-ENGINEERING input ingress vlan Engineering

root@VCF# set forwarding-options analyzer COPY-ENGINEERING output interface xe-3/0/0.0

root@VCF# commit and-quit

configuration check succeeds

commit complete

To view and verify the creation of the analyzer, we can use the show forwarding-options analyzer command:

{master:0}

root@VCF> show forwarding-options analyzer

Analyzer name : COPY-ENGINEERING

Mirror rate : 1

Maximum packet length : 0

State : up

Ingress monitored VLANs : default-switch/Engineering
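
As mentioned earlier, the analyzer input can also be a single interface instead of an entire VLAN. A short sketch, using a hypothetical analyzer name and ports, looks like this:

{master:0}[edit]

root@VCF# set forwarding-options analyzer COPY-SERVER-PORT input ingress interface xe-2/0/0.0

root@VCF# set forwarding-options analyzer COPY-SERVER-PORT output interface xe-3/0/1.0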

Summary

VCF is a great technology to quickly get you on your feet and build out a high-performance network that you can manage as a single switch. VCF offers three provisioning modes to suit your data center management and security needs. By taking advantage of carrier-class Junos features such as GRES, NSR, and NSB, Virtual Chassis Fabric can gracefully switch between routing engines during a failure without dropping your critical traffic in the data center.

Whether you’re building out a small to medium data center or a large data center with a POD architecture, VCF is a great way to easily manage your data center with a rich set of features and outstanding performance.

Chapter Review Questions

1. How many provisioning modes does VCF support?

a. 1

b. 2

c. 3

d. 4

2. How many vme interfaces are active at any given time in VCF?

a. None.

b. Only one.

c. Two. One for the master routing engine and another for the backup routing engine.

d. All of them.

3. Can you add an EX4300 switch to VCF in fabric mode?

a. Yes

b. No

4. You want to configure the first 10GbE port on the second leaf. What interface is this?

a. xe-1/1/0

b. xe-0/1/0

c. xe-1/1/1

d. xe-3/0/0

Chapter Review Answers

1. Answer: C.

VCF supports three provisioning modes: auto-provisioned, preprovisioned, and nonprovisioned.

2. Answer: B.

Only the master routing engine’s vme interface is active at any given time.

3. Answer: B.

To support the EX4300, QFX3500, or the QFX3600, VCF must be put into mixed mode.

4. Answer: D.

This is a really tricky question. It depends on how many spines there are. Using the assumption that there are two spines and four leaf switches, the FPC number of the second leaf switch would be 3. The first port would be 0. The answer would be xe-3/0/0.