CCNP Security FIREWALL 642-618 Official Cert Guide (2012)

Chapter 11. Handling Traffic

This chapter covers the following topics:

Handling Fragmented Traffic: This section explains how a Cisco ASA can virtually reassemble all the fragments of a packet to inspect the contents.

Prioritizing Traffic: This section explains how traffic is handled as it is passed out an ASA interface and how time-critical traffic can be prioritized for premium service.

Controlling Traffic Bandwidth: This section covers traffic policing and traffic shaping, two methods you can configure to rate limit traffic passing through an interface.

A Cisco ASA is normally busy inspecting traffic and applying security policies. It also has features that you can leverage to control how it handles packets as they pass through. Even if the packets from different traffic flows all pass the same inspection policies, the ASA can handle them differently to affect how they are forwarded. This chapter covers these features in detail.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 11-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”

Table 11-1. “Do I Know This Already?” Section-to-Question Mapping



Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark this question wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.


1. Suppose an ASA receives a series of packet fragments from a source host on the outside, destined for a host on the inside. Which one of the following answers correctly describes a possible outcome?

a. The ASA recognizes the packets as fragments and automatically drops them.

b. Each fragment is inspected individually and forwarded as appropriate.

c. The ASA virtually reassembles the fragments into their original complete packets, which are then inspected. If they pass inspection, then the whole packets are forwarded to their destination.

d. The ASA virtually reassembles the fragments into their original complete packets, which are then inspected. If the whole packets pass inspection, the original fragments are forwarded to the destination.

2. By default, how many fragments can an ASA buffer for virtual reassembly of a single packet?

a. 1

b. 2

c. 16

d. 24

e. 1024

3. Which two of the following best describe priority queuing on an ASA?

a. The ASA platform does not support priority queuing.

b. Each interface can support one LLQ.

c. Each interface can support two priority queues.

d. An interface priority queue must be enabled before it can be used.

4. Which one of the following commands should be used to enable an LLQ on an ASA interface?

a. ciscoasa(config)# interface ethernet0/0
ciscoasa(config-if)# priority-queue

b. ciscoasa(config)# priority-queue outside

c. ciscoasa(config)# llq enable

d. ciscoasa(config)# interface ethernet0/0
ciscoasa(config-if)# priority

5. Which one of the following happens to incoming packets that are classified for priority queuing if the interface priority queue is currently full?

a. The incoming packets are dropped.

b. The incoming packets are moved into the best-effort queue instead.

c. The incoming packets are buffered until the priority queue empties.

d. Nothing; the priority queue can never fill.

6. Suppose you need to configure an ASA to send packets containing streaming video into a priority queue. You have just configured a class map to match the video packets with the match command. Which one of the following correctly describes the next step you should take?

a. Enter the priority command in the class map configuration.

b. Enter the policy-map command, followed by the priority command.

c. Enter the policy-map command, followed by the class command, followed by the priority command.

d. Enter the service-policy command, followed by the priority command.

7. Which one of the following traffic handling features can hold specific traffic flows within a bandwidth limit?

a. Virtual packet reassembly

b. Priority queuing

c. Traffic policing

d. Traffic shaping

8. By default, which one of the following actions will a traffic policer take for packets that don’t conform to the bandwidth threshold limit?

a. Drop the packets.

b. Transmit the packets.

c. Delay the packets.

d. Inspect the packets.

9. Which one of the following is a valid MPF configuration to implement traffic shaping?

a. class-map test-class
match any
policy-map test
class test-class
shape average 10000000

b. class-map test-class
match access-list test-acl
policy-map test
class test-class
shape average 10000000

c. policy-map test
class class-default
shape average 10000000

d. policy-map test
shape average 10000000

10. Which of the following combinations of traffic handling features can be configured simultaneously on the same ASA interface?

a. Interface priority queuing and traffic shaping.

b. Interface priority queuing and traffic policing.

c. Traffic policing and traffic shaping.

d. None of these answers are correct.

Foundation Topics

Packets coming into an ASA may be fragmented or whole. The same security policies that inspect whole packets aren’t as effective when inspecting fragments. An ASA can be configured to intercept packet fragments and virtually reassemble them so that they can be inspected normally.

An ASA can also be configured to identify certain traffic types so that they can be handled in a more efficient manner than is normally done. This allows time- or mission-critical packets to be forwarded ahead of other packets after inspection.

You can also configure an ASA to control the amount of bandwidth used by certain types of traffic. Traffic policing and shaping are two methods to hold traffic bandwidth within predefined limits.

In this chapter, you learn how to configure the traffic handling features.

Handling Fragmented Traffic

When an ASA sends or receives a packet, you might think that the whole packet moves along as a single unit of data. This is true as long as the ASA interface is configured to handle units of data that are at least as large as the packet. This same principle applies to any device that is connected to a network, including hosts, routers, and switches. The maximum size is called the maximum transmission unit (MTU) and is configured on a per-interface basis. MTU configuration is covered in the “Configuring the Interface MTU” section of Chapter 3, “Configuring ASA Interfaces.”

By default, any Ethernet interface has its MTU set to 1500 bytes, which is the normal maximum and expected value for Ethernet frames. If a packet is larger than the MTU, it must be fragmented before being transmitted. The resulting fragments are then sent individually; once they arrive at the destination, the fragments are reassembled into the original complete packet.

You can verify the interface MTU settings with the show running-config mtu command. If you find that the default MTU value of 1500 needs to be adjusted, you can set the interface MTU from 64 to 65,535 bytes. Be aware that 9216 bytes is a common practical upper limit, corresponding to the jumbo frame size on platforms that support jumbo frames. In ASDM, navigate to Configuration > Device Setup > Interfaces > Edit > Advanced and enter the new MTU value. In the CLI, you can use the following configuration command:

ciscoasa(config)# mtu interface bytes
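For example, you might lower the MTU on an interface that carries tunneled traffic to leave room for encapsulation overhead. The following is a minimal sketch; the interface name and the value of 1380 bytes are arbitrary examples, not recommendations:

ciscoasa(config)# mtu outside 1380
ciscoasa(config)# exit
ciscoasa# show running-config mtu
mtu outside 1380
mtu inside 1500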

Cisco ASAs can participate in MTU discovery along an end-to-end IP routing path. This process follows RFC 1191, where the source and destination are expected to use an MTU value equal to the smallest allowed MTU along the complete path.

What happens when an ASA receives packets that have already been fragmented? Rather than passing the fragments along toward their destination, an ASA will inspect the fragments to make sure that they aren’t part of some malicious activity. To do this, the ASA must store each fragment in a cache and virtually reassemble the fragments so that it can inspect the complete original packet and verify the order and integrity of each fragment. If the reassembled packet passes inspection, the ASA discards the virtually reassembled copy and forwards all of the original fragments toward the destination, as if nothing had happened to them.

Naturally, an ASA has to limit the resources it uses for the virtual packet reassembly process. Otherwise, someone could send an endless stream of fragmented packets and exhaust the ASA’s memory. Virtual packet reassembly is limited in the following ways by default:


• A maximum of 200 unique packets awaiting virtual reassembly per interface

• A maximum of 24 fragments for a single packet

• A maximum time of 5 seconds for all fragments of a packet to arrive

You can also configure virtual packet reassembly from Cisco Adaptive Security Device Manager (ASDM). Select Configuration > Firewall > Advanced > Fragment. Select an interface from the list and then click the Edit button to change any of the virtual reassembly parameters, as shown in Figure 11-1. You can also click the Show Fragment button to display the reassembly counters for each interface.


Figure 11-1. Tuning Virtual Fragment Reassembly in ASDM

You can make similar virtual packet reassembly adjustments from the CLI with the commands listed in Table 11-2.

Table 11-2. Commands Used to Configure Virtual Packet Reassembly Limits

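As a guideline, the limits are adjusted with the fragment configuration command. The following sketch assumes the standard fragment command forms; the outside interface and the specific values are arbitrary examples:

ciscoasa(config)# fragment size 300 outside
ciscoasa(config)# fragment chain 32 outside
ciscoasa(config)# fragment timeout 10 outside

The size keyword sets the database limit (packets awaiting reassembly), chain sets the maximum number of fragments allowed per packet, and timeout sets how long the ASA waits for all fragments of a packet to arrive.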

You can monitor an ASA’s fragmentation activity with the show fragment EXEC command. In Example 11-1, the outside interface has the default fragment settings (database size 200 packets, chain limit 24 fragments, and timeout limit 5 seconds).

Example 11-1. Displaying Virtual Reassembly Activity


ciscoasa# show fragment outside
Interface: outside
Size: 200, Chain: 24, Timeout: 5, Reassembly: virtual
Queue: 2, Assemble: 2562, Fail: 972, Overflow: 713
ciscoasa#


The output shows that the ASA has reassembled 2562 packets, and two packets are awaiting reassembly. The output also shows that the reassembly process has failed 972 times. This is because the timeout limit expired while the process was waiting for all fragments to arrive. The process has also had overflow conditions, indicating that for 713 different packets, more than 24 fragments arrived and overflowed the packet buffer.

Prioritizing Traffic

Sometimes, an ASA can inspect and prepare to send packets faster than they can be transmitted on an interface. Each ASA interface has an output queue or buffer that stores outbound packets temporarily until they can be transmitted.

This single queue structure makes for simple interface operation. The first packet placed into the queue is the first one taken out and transmitted. There is no differentiation between types of traffic and no consideration of quality of service (QoS) requirements. Regardless of their contents, packets leave the queue in the same order they entered it.


Packets that are placed into the queue are sent in a best-effort fashion. This means that the ASA will make its best effort to send the next packet it finds in the interface queue. In fact, this queue is known as a best-effort queue (BEQ), as shown in Figure 11-2, where an HTTP packet was the first to arrive in the queue, followed by an RTP packet and then an SMTP packet.


Figure 11-2. Best-Effort Queue Operation on an ASA Interface

A BEQ usually works fine for common types of traffic; however, it presents a problem for time-critical data that might pass through an ASA. For example, any type of streaming audio or video must be forwarded in a predictable manner so that packets aren’t delayed too much before they reach their destination. Those packets also need to be forwarded at a fairly regular rate; too much variation in packet-to-packet delay (jitter) results in poor-quality audio or video at the destination.

In Figure 11-2, suppose the RTP packets contain pieces of a real-time audio stream. The first RTP packet is near the head end of the queue, almost ready to be transmitted. The next RTP packet is just arriving and will be placed at the tail end of the queue.

Ideally, the RTP packets should be transmitted as closely together as possible to preserve the real-time nature of the contents. When streaming data is mixed with other types of high-volume data passing through a firewall, however, the nonstreaming data can starve the streaming data flow. This can happen simply because the streaming packets get lost in a sea of other packets competing for transmission time.


To help deliver time-critical traffic more efficiently, an ASA can also maintain one priority or low-latency queue (LLQ) on each of its interfaces. Packets are placed in this queue only when they match specific criteria. Any packets in the LLQ are transmitted ahead of any packets in the BEQ, providing priority service. Figure 11-3 demonstrates this concept, where only the RTP packets are being classified and then sent into the LLQ. The ASA always services the LLQ first, so any RTP packets found there are immediately transmitted ahead of anything found in the BEQ.


Figure 11-3. Low-Latency Queue Operation on an ASA Interface

If either the BEQ or LLQ fills during a time of interface congestion, any other packets destined for the queue are simply dropped. In addition, there is no crossover or fallback between queues. For example, if the LLQ is full, subsequent priority packets are not placed in the BEQ; they are dropped instead.

Both the BEQ and the LLQ are queues maintained in software. In addition, an ASA uses a hardware queue called the transmit ring to buffer packets that will be copied directly to the physical interface hardware for transmission. Packets are pulled from the LLQ first, and then the BEQ, and then they are placed in the hardware queue of the egress interface.

You can configure priority queuing through ASDM. Select Configuration > Device Management > Advanced > Priority Queue. By default, only a BEQ is enabled and used on each interface. You must specifically enable an LLQ by clicking the Add button and selecting the interface name, as shown in Figure 11-4. You can also tune the queue limit and the transmission ring limit, if needed.


Figure 11-4. Enabling and Tuning an Interface Priority Queue

As soon as the priority queue is enabled for the first time, the queue depth limit is set to a calculated default value. The limit is the number of 256-byte packets that can be transmitted on the interface over a 500-ms period. Naturally, the default value varies according to the interface speed, but it always has a maximum value of 2048 packets.

The queue limit value in packets (1 to 2048) varies according to the amount of ASA memory and the interface speed. In addition, packets can vary in size, but the queue is always measured in generic packets, which can be up to the interface MTU (1500 bytes by default) in length.

Similarly, as soon as the interface priority queue is enabled for the first time, the transmit ring limit is set to a calculated default value. The limit is the number of 1500-byte packets that can be transmitted on the interface in a 10-ms period. The packets limit has a minimum of 3 and a maximum that varies according to the interface and available memory.
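To put the calculations in perspective, consider a Gigabit Ethernet interface: 500 ms worth of 256-byte packets works out to roughly 244,000 packets, far above the cap, so the queue limit defaults to 2048 packets. Similarly, 10 ms worth of 1500-byte packets is roughly 833 packets, so the transmit ring limit defaults to its upper bound for that interface (512, as shown later in Example 11-2).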

Next, define a service policy rule by selecting Configuration > Firewall > Service Policy Rules, as described in Chapter 9, “Inspecting Traffic.” You can either add a new rule or edit an existing one. When you get to the Rule Actions dialog box, be sure to check the Enable Priority for This Flow check box, as shown in Figure 11-5.


Figure 11-5. Defining Priority Queuing as an MPF Action in ASDM

You can configure priority queuing with the CLI by using the following sequence of steps:

Step 1. Enable the priority queue on an interface:

ciscoasa(config)# priority-queue interface

Step 2. Tune the interface queues:

Use the commands listed in Table 11-3 to set the depth of both the BEQ and LLQ and to set the transmit ring queue depth in packets.

Table 11-3. Commands Used to Tune Interface Queues

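The following sketch shows how the queues might be tuned from the CLI, assuming the standard priority-queue subcommands; the interface name and values are arbitrary examples:

ciscoasa(config)# priority-queue outside
ciscoasa(config-priority-queue)# queue-limit 1024
ciscoasa(config-priority-queue)# tx-ring-limit 256
ciscoasa(config-priority-queue)# exit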

To see the current interface queue values, you can use the show priority-queue config command, as shown in Example 11-2.

Example 11-2. Displaying the Current Interface Queue Sizes


ciscoasa# show priority-queue config
Priority-Queue Config interface outside
current default range
queue-limit 2048 2048 0 - 2048
tx-ring-limit 512 512 3 - 512

Priority-Queue Config interface inside
current default range
queue-limit 0 2048 0 - 2048
tx-ring-limit -1 512 3 - 512


Notice that the outside interface is shown with current values of 2048 and 512, which are the same as the default values. However, the inside interface has values 0 and –1. The difference is that the LLQ on the outside interface has been enabled with the priority-queue outside command. The inside interface is still using its default BEQ because its LLQ has not been enabled. As a result, the inside priority queue has a queue-limit, or depth, of zero packets; the –1 value for tx-ring-limit indicates that it is currently disabled.

Step 3. Configure the MPF to use the LLQ: By default, all packets are sent to the best-effort queue, regardless of whether a priority queue has been configured and enabled. To send packets to the priority queue, you must use the Modular Policy Framework (MPF) to configure a service policy that matches specific traffic with a class map and then assigns that traffic to the priority queue. MPF configuration is covered in Chapter 9.

Use Example 11-3 as a guideline for your MPF configuration. Use the priority command as the action to send the matched traffic into an LLQ. Actually, the packets are marked to be destined for only a generic priority queue. When the ASA is ready to forward them, they will be placed into the priority queue on the appropriate interface.

Example 11-3. MPF Structure for Sending Matched Packets into an LLQ


ciscoasa(config)# class-map class_map_name
ciscoasa(config-cmap)# match condition
ciscoasa(config-cmap)# exit

ciscoasa(config)# policy-map policy_map_name
ciscoasa(config-pmap)# class class_map_name
ciscoasa(config-pmap-c)# priority
ciscoasa(config-pmap-c)# exit
ciscoasa(config-pmap)# exit

ciscoasa(config)# service-policy policy_map_name interface interface
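
As a concrete illustration, the following sketch sends RTP-based audio and video into the LLQ, matching the scenario in Figure 11-3. The class and policy map names are arbitrary, and the RTP port range shown is only an example:

ciscoasa(config)# class-map rtp_traffic
ciscoasa(config-cmap)# match rtp 16384 16383
ciscoasa(config-cmap)# exit
ciscoasa(config)# policy-map outside_policy
ciscoasa(config-pmap)# class rtp_traffic
ciscoasa(config-pmap-c)# priority
ciscoasa(config-pmap-c)# exit
ciscoasa(config-pmap)# exit
ciscoasa(config)# service-policy outside_policy interface outside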


Controlling Traffic Bandwidth

Priority queuing can give premium service to specific types of traffic without any real regard for the bandwidth being used. If packets sent into a priority queue are always forwarded ahead of any other traffic, the priority queue shouldn’t be affected by other traffic flows that might use up significant bandwidth.

Aside from the priority queue, all traffic flows passing through an ASA have to compete for the available bandwidth on an interface. By default, there are no limits on bandwidth usage. This configuration might be fine in many scenarios, where traffic flows are serviced on a “best effort” basis. However, suppose one user initiates a peer-to-peer file sharing connection that transfers a huge amount of data as fast as possible. This one traffic flow might starve out other important or mission-critical flows until it completes. In that case, you might want to have a way to keep runaway bandwidth usage in check.

You can leverage two ASA features to control or limit the amount of bandwidth used by specific traffic flows:

• Traffic policing

• Traffic shaping

With either method, the ASA measures the bandwidth used by traffic that is classified by a service policy and then attempts to hold the traffic within a configured rate limit. However, each method accomplishes the bandwidth control in a different manner.


With traffic policing, the packets are forwarded normally as long as the bandwidth threshold is not exceeded. However, packets that do exceed the bandwidth threshold are simply dropped. Figure 11-6 illustrates this process, where the dashed line represents the original traffic flow and the solid line shows the resulting policed traffic.


Figure 11-6. Effects of Traffic Policing

Notice how the traffic pattern is not changed by the policer, as the traffic rate still rises and falls; only the peaks that would have risen above the threshold are missing because those packets have been dropped. This process doesn’t introduce any latency or jitter to the original traffic flow as long as the traffic conforms or stays below the police threshold limit. It can lead to TCP retransmissions if TCP packets are among those dropped, however.


In contrast, traffic shaping takes a more preemptive approach. Traffic is buffered before it is forwarded so that the traffic rate can be shaped or held within the threshold limit. The idea is to pull packets from the buffer at a rate that stays within the threshold so that no packets are dropped. Figure 11-7 illustrates this process; again, the dashed line shows the original traffic and the solid line shows the resulting shaped traffic.


Figure 11-7. Effects of Traffic Shaping

Notice how traffic shaping has smoothed out any rises and falls of the original traffic rate. Although this process smoothes out the traffic flow and holds it within the threshold, it also introduces a variable delay or jitter as the packets are buffered and then forwarded at varying times. Because packets are not normally dropped during shaping, TCP retransmissions are minimized.

Traffic shaping can be performed only on outbound traffic on an interface. In addition, traffic shaping operates on the bulk traffic passing through an interface rather than on specific traffic matched by a class map. This property makes traffic shaping handy when a high-speed ASA interface is connected to a lower-speed device, such as a broadband Internet service. In that case, the traffic shaping threshold can be configured to match the lower-speed bandwidth.

Configuring a Traffic Policer

To use ASDM to configure traffic policing, begin by navigating to Configuration > Firewall > Service Policy Rules and adding a new service policy rule or editing an existing one. Define a matching condition that will classify the traffic that will be policed. Next, click the QoS tab in the Rule Action dialog box, as shown in Figure 11-8.


Figure 11-8. Configuring a Traffic Policer in ASDM

Check the Enable Policing check box, and then choose either Input Policing or Output Policing. Packets that are matched and policed are held to a strict bandwidth policy. You can enter the bandwidth limit threshold as the Committed Rate, given in bits per second, from 8000 (8 kbps) to 2,000,000,000 (2 Gbps). As long as the bandwidth does not exceed the committed rate, the ASA takes the Conform Action, which either transmits the conforming packets (the default) or drops them.

You can also specify an “instantaneous” amount of burst traffic that is allowed when the conform rate is exceeded. This is given as the Burst Size, from 1000 (1 KB) to 512,000,000 (512 MB), with a default of 1500 bytes. If the committed rate is exceeded by more than the burst size, the traffic is considered nonconforming, and the Exceed Action is taken. The ASA can either drop (the default) or transmit the nonconforming packets.

It might seem odd that the conform rate is specified in bits per second while the burst is given in bytes, but that is how the policer operates. A 10-ms clock interval is used to measure policed traffic. The byte counts of matching packets are added to a “bucket” whose “high-water mark” is set to the amount of traffic that can be transmitted in one clock tick. In addition, the bucket is emptied at every interval of the policer clock (10 ms).

As long as the committed rate is not exceeded, the bucket should never fill. If a burst size is configured, it is added to the bucket’s high-water mark. Therefore, in one clock tick (10 ms), the amount of matching traffic can exceed the conforming amount by the burst size in bytes.
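To put numbers to this mechanism, a committed rate of 100 Mbps equates to 12,500,000 bytes per second, or 125,000 bytes per 10-ms clock tick; with the default 1500-byte burst size added to the high-water mark, up to 126,500 bytes of matched traffic can conform within any single tick.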

As an example, suppose you need to configure a policer to limit outbound HTTP traffic to an aggregate rate of 100 Mbps. Conforming traffic should be forwarded normally, but traffic that exceeds the conform rate should be dropped. The HTTP servers are located on the inside of the ASA, and all relevant clients are located outside. Figure 11-8 shows the values you could use to configure the service policy.

Finally, click the Finish button to complete the service policy rule. The new rule and the policer action will be shown in the summary list, as shown in Figure 11-9.


Figure 11-9. Verifying the Traffic Policer Policy Rule Action

If you choose to use the CLI instead, you can configure traffic policing by defining a service policy using the MPF. Be sure to match the traffic that will be policed by defining a class map. Reference the class map in a policy map and define the policy action to be one of the commands listed in Table 11-4. The conform_rate in the CLI command is identical to the committed rate in ASDM.

Table 11-4. Command Syntax for Traffic Policing or Shaping

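As a general guide, the two MPF actions follow the syntax shown in the following sketch, which reflects the common forms; square brackets indicate optional keywords:

ciscoasa(config-pmap-c)# police {input | output} conform_rate [burst_bytes] [conform-action {drop | transmit}] [exceed-action {drop | transmit}]
ciscoasa(config-pmap-c)# shape average rate [burst_size]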

Chapter 9 covers MPF configuration in greater detail. Use Example 11-4 as a guideline for your MPF configuration. Use the police command as the action to send the matched traffic into a traffic policer.

Example 11-4. MPF Structure for Traffic Policing


ciscoasa(config)# class-map class_map_name
ciscoasa(config-cmap)# match condition
ciscoasa(config-cmap)# exit

ciscoasa(config)# policy-map policy_map_name
ciscoasa(config-pmap)# class class_map_name
ciscoasa(config-pmap-c)# police ...
ciscoasa(config-pmap-c)# exit
ciscoasa(config-pmap)# exit

ciscoasa(config)# service-policy policy_map_name interface interface


Example 11-5 lists the commands you can use to configure a service policy to implement the scenario used in Figure 11-8.

Example 11-5. Configuring a Traffic Policer to Control Outbound HTTP Traffic


ciscoasa(config)# access-list outbound_http extended permit tcp any eq http any
ciscoasa(config)# class-map class_http
ciscoasa(config-cmap)# match access-list outbound_http
ciscoasa(config-cmap)# exit
!
ciscoasa(config)# policy-map mypolicy
ciscoasa(config-pmap)# class class_http
ciscoasa(config-pmap-c)# police output 100000000 conform-action transmit exceed-action drop
ciscoasa(config-pmap-c)# exit
ciscoasa(config-pmap)# exit
!
ciscoasa(config)# service-policy mypolicy interface outside


You can verify traffic policer operation by entering the show service-policy police command, as demonstrated in Example 11-6. From the command output, you can see that the bandwidth threshold (“cir,” or committed information rate) is 100 Mbps and the burst size is 50,000 bytes. The command output also shows current estimates of the bits per second for traffic that conformed or exceeded the threshold.

Example 11-6. Displaying Information About Traffic Policing


ciscoasa# show service-policy police
Interface outside:
Service-policy: outside-policy
Class-map: class-http
Output police Interface outside:
cir 100000000 bps, bc 50000 bytes
conformed 1431 packets, 351327 bytes; actions: transmit
exceeded 339 packets, 53871 bytes; actions: drop
conformed 50002 bps, exceed 42556 bps
ciscoasa#


Configuring Traffic Shaping

To configure traffic shaping in ASDM, begin by adding a new service policy rule or editing an existing one. Traffic shaping doesn’t shape specific matched traffic; it shapes the default traffic that isn’t matched or classified by any other traffic class. Therefore, you have to use the class-default class map to match the traffic. This is done by selecting the Use Class-Default As the Traffic Class option in the Traffic Classification Criteria dialog box, as shown in Figure 11-10.


Figure 11-10. Using the Class-Default Traffic Class to Match Traffic for Shaping

Next, click the QoS tab in the Rule Actions dialog box and check Enable Traffic Shaping as the policy action, as shown in Figure 11-11. (If you chose any matching criteria other than class-default, the Enable Traffic Shaping option will not be shown.)


Figure 11-11. Configuring Traffic Shaping in ASDM

Traffic shaping buffers packets and attempts to hold the interface bandwidth close to an average rate. You can set the Average Rate parameter in bits per second, from 64,000 to 154,400,000, in multiples of 8000 bps.

You can also specify a burst size in bits, which determines the amount of traffic that can be sent in excess of the average rate. The burst size is automatically calculated by default, based on the average rate you configure, but is shown blank in ASDM. The default size is based on how much traffic can be sent in a 4-ms time period at the average rate. Normally, the default value is optimal, so you should not have to specify a burst size. If you do decide to set it explicitly, the burst-size parameter can range from 2048 to 154,400,000 bits.

Can you configure both priority queuing and traffic shaping, so that some traffic will be handled ahead of the rest of the traffic being shaped? In a nutshell, no; an ASA does not support both features on the same interface. However, the ASDM traffic shaping configuration does support hierarchical priority queuing if you check the Enforce Priority to Selected Shaped Traffic check box under the Average Rate and Burst Size parameters.

Once the option is selected, you can click the Configure button to define matching criteria in a subsequent dialog box, as shown in Figure 11-12. The criteria will be used to match traffic being sent into the traffic shaper. The matched traffic will be given priority service within the traffic shaping process, but the packets are not sent into an interface priority queue.


Figure 11-12. Configuring a Matching Criteria for Priority Handling in the Traffic Shaper

You can also use the CLI to configure traffic shaping. Use Example 11-7 as a guideline for your MPF configuration. Use the shape command as the action to send the matched traffic into a traffic shaper. Traffic shaping can be applied only to the bulk amount of traffic passing through an interface. Therefore, the matching condition you enter into the policy map configuration is important. The only permissible command is class class-default, followed by the shape command action.

Example 11-7. MPF Structure for Traffic Shaping


ciscoasa(config)# class-map class_map_name
ciscoasa(config-cmap)# match condition
ciscoasa(config-cmap)# exit
ciscoasa(config)# policy-map policy_map_name
ciscoasa(config-pmap)# class class-default
ciscoasa(config-pmap-c)# shape ...
ciscoasa(config-pmap-c)# exit
ciscoasa(config-pmap)# exit

ciscoasa(config)# service-policy policy_map_name interface interface


In Example 11-8, traffic shaping is configured for an average rate of 100 Mbps. Notice that no class map has been defined. Instead, the policy map uses the class-default class map to match against all traffic. This scenario matches the one configured in ASDM in Figure 11-11.

Example 11-8. Configuring Traffic Shaping


ciscoasa(config)# policy-map outside-policy
ciscoasa(config-pmap)# class class-default
ciscoasa(config-pmap-c)# shape average 100000000
ciscoasa(config-pmap-c)# exit
ciscoasa(config-pmap)# exit
!
ciscoasa(config)# service-policy outside-policy interface outside
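
If you also want priority handling for some of the shaped traffic (the hierarchical priority queuing option described earlier), the CLI equivalent nests a second policy map under the shaped class. The following is a minimal sketch; the voice_rtp class map, its match condition, and the policy map names are hypothetical examples:

ciscoasa(config)# class-map voice_rtp
ciscoasa(config-cmap)# match rtp 16384 16383
ciscoasa(config-cmap)# exit
!
ciscoasa(config)# policy-map priority_policy
ciscoasa(config-pmap)# class voice_rtp
ciscoasa(config-pmap-c)# priority
ciscoasa(config-pmap-c)# exit
ciscoasa(config-pmap)# exit
!
ciscoasa(config)# policy-map outside-policy
ciscoasa(config-pmap)# class class-default
ciscoasa(config-pmap-c)# shape average 100000000
ciscoasa(config-pmap-c)# service-policy priority_policy
ciscoasa(config-pmap-c)# exit
ciscoasa(config-pmap)# exit
ciscoasa(config)# service-policy outside-policy interface outside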


You can verify traffic shaper operation by entering the show service-policy shape command, as shown in Example 11-9. The command output shows that the average bandwidth threshold is 100 Mbps and the burst size is 400,000 bits, which is the calculated default (4 ms worth of traffic at 100,000,000 bps). The traffic shaper is using a buffer queue that can hold 1666 packets waiting to be shaped and forwarded.

Example 11-9. Displaying Information About Traffic Shaping


ciscoasa# show service-policy shape
Interface outside:
Service-policy: outside-policy
Class-map: class-default


shape (average) cir 100000000, bc 400000
Queueing
queue limit 1666 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
ciscoasa#


Exam Preparation Tasks

As mentioned in the section, “How to Use This Book,” in the Introduction, you have a couple of choices for exam preparation: the exercises here, Chapter 17, “Final Preparation,” and the exam simulation questions on the CD-ROM.

Review All Key Topics

Review the most important topics in this chapter, noted with the Key Topic icon in the outer margin of the page. Table 11-5 lists a reference of these key topics and the page numbers on which each is found.


Table 11-5. Key Topics for Chapter 11


Define Key Terms

Define the following key terms from this chapter and check your answers in the glossary:

virtual reassembly

maximum transmission unit (MTU)

best-effort queue (BEQ)

low-latency queue (LLQ)

traffic policing

traffic shaping

Command Reference to Check Your Memory

This section includes the most important configuration and EXEC commands covered in this chapter. It might not be necessary to memorize the complete syntax of every command, but you should be able to remember the basic keywords that are needed.

To test your memory of the commands, cover the right side of Tables 11-6 through 11-8 with a piece of paper, read the description on the left side, and then see how much of the command you can remember.

Table 11-6. Commands Related to ASA Fragment Handling


Table 11-7. Commands Related to Priority Handling


Table 11-8. Commands Related to Controlling Traffic Bandwidth


The FIREWALL exam focuses on practical, hands-on skills that are used by a networking professional. Therefore, you should be able to identify the commands needed to configure and test an ASA feature.