Priority Queue Drops
Queueing
Document ID: 10105
Contents
Introduction
Prerequisites
Requirements
Components Used
Conventions
Drops with ip rtp priority and LLQ
Drops with Legacy Priority Queueing
Traffic Measurement with a Token Bucket
Troubleshooting Steps to Diagnose Drops
Step 1 - Collect Data
Step 2 - Ensure Sufficient Bandwidth
Step 3 - Ensure Sufficient Burst Size
Step 4 - debug priority
Other Causes for Drops
Priority Queues Drops and Frame Relay
Related Information
Introduction
This document provides tips on how to troubleshoot output drops that result from a priority queueing
mechanism configuration on a router interface.
Prerequisites
Requirements
Readers of this document should be familiar with these priority queueing mechanisms:
• ip rtp priority
• The priority command with low latency queueing (LLQ)
• Legacy priority queueing with the priority-group and priority-list commands
A router can report output drops when any of these methods is configured, but there are important functional
differences between the methods and the reason for drops in each case.
Components Used
This document is not restricted to specific software and hardware versions.
The information presented in this document was created from devices in a specific lab environment. All of the
devices used in this document started with a cleared (default) configuration. If you are working in a live
network, ensure that you understand the potential impact of any command before using it.
Conventions
For more information on document conventions, refer to Conventions Used in Cisco Technical Tips.
• ip rtp priority: Because the ip rtp priority command gives absolute priority over other traffic, it
should be used with care. In the event of congestion, if the traffic exceeds the configured bandwidth,
all of the excess traffic is dropped.
• priority command and LLQ: When you specify the priority command for a class, it takes a
bandwidth argument that specifies the maximum bandwidth. In the event of congestion, policing is used to
drop packets when this bandwidth is exceeded.
These two mechanisms use a built-in policer to meter the traffic flows. The purpose of the policer is to ensure
that the other queues are serviced by the queueing scheduler. In Cisco's original priority queueing feature,
which uses the priority-group and priority-list commands, the scheduler always serviced the
highest-priority queue first. If there was always traffic in the high-priority queue, the lower-priority queues
were starved of bandwidth, and packets destined for the non-priority queues were dropped.
Enabling priority queueing on an interface changes the output queue display. Before priority queueing is
enabled, the Ethernet interface uses a single output hold queue with the default queue size of 40 packets.
After PQ is enabled, the interface instead uses four priority queues with varying queue limits.
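The change looks along these lines (a sketch only, not verbatim IOS output; the exact show interfaces format varies by release, but the default per-queue limits of 20, 40, 60, and 80 packets are standard):

```
! Before priority queueing -- one FIFO hold queue:
Output queue 0/40, 0 drops

! After priority-group is applied -- four queues with default limits:
Queueing strategy: priority-list 1
Output queue (queue priority: size/max/drops):
   high: 0/20/0, medium: 0/40/0, normal: 0/60/0, low: 0/80/0
```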
The priority-list list-number command is used to assign traffic flows to a specific queue. As packets
arrive at an interface, the priority queues on that interface are scanned for packets in descending order of
priority. The high-priority queue is scanned first, then the medium-priority queue, and so on. The packet at the
head of the highest-priority non-empty queue is chosen for transmission. This procedure is repeated every time
a packet is to be sent.
Each queue is defined by a maximum length, or the maximum number of packets the queue can hold. When
an arriving packet would cause the current queue depth to exceed the configured queue limit, the packet is
dropped. Thus, as noted above, output drops with PQ typically are due to exceeding the queue limit and not to
an internal policer, as is the typical case with LLQ. The priority-list list-number queue-limit command
changes the size of a priority queue.
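For example (an illustrative sketch; the classification line and the values are hypothetical), this configuration doubles each queue's default limit of 20, 40, 60, and 80 packets:

```
priority-list 1 protocol ip high
priority-list 1 default normal
priority-list 1 queue-limit 40 80 120 160
!
interface Ethernet0
 priority-group 1
```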
A token bucket itself has no discard or priority policy. The token bucket metaphor works along the following
lines: tokens that represent bytes are added to the bucket at the configured rate up to the bucket's capacity, and
a packet can be sent only when the bucket holds at least as many bytes as the size of the packet.
Let's look at an example using packets and a committed information rate (CIR) of 8000 bps.
• In this example, the initial token bucket starts full at 1000 bytes.
• When a 450 byte packet arrives, the packet conforms because enough bytes are available in the
conform token bucket. The packet is sent, and 450 bytes are removed from the token bucket, leaving
550 bytes.
• When the next packet arrives 0.25 seconds later, 250 bytes are added to the token bucket ((0.25 *
8000)/8), leaving 800 bytes in the token bucket. If the next packet is 900 bytes, the packet exceeds,
and is dropped. No bytes are taken from the token bucket.
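The walk-through above can be sketched as a small meter (a minimal sketch of a conform/drop policer; the class name TokenBucket and the method names are illustrative, not IOS internals):

```python
class TokenBucket:
    """Byte-based token bucket: a packet conforms if enough tokens remain, else it is dropped."""

    def __init__(self, cir_bps, capacity_bytes):
        self.rate = cir_bps / 8.0             # refill rate in bytes per second
        self.capacity = capacity_bytes
        self.tokens = float(capacity_bytes)   # the bucket starts full
        self.last = 0.0

    def offer(self, now, size):
        # Refill for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:               # conform: send and debit the bucket
            self.tokens -= size
            return "conform"
        return "exceed"                       # exceed: drop, bucket left untouched

bucket = TokenBucket(cir_bps=8000, capacity_bytes=1000)
print(bucket.offer(0.0, 450), bucket.tokens)   # conform, 550.0 bytes left
print(bucket.offer(0.25, 900), bucket.tokens)  # exceed, 800.0 bytes left
```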
1. Run the relevant show commands, such as show interfaces and show policy-map interface, several
times and determine how quickly and how often the drops increment. Use the output to establish a
baseline of your traffic patterns and traffic levels, and determine the "normal" drop rate on the interface.
Note: Matching values for the "packets" and "pkts matched" counters indicate that a
large number of packets are being process switched or that the interface is experiencing
extreme congestion. Both of these conditions can lead to exceeding a class's queue limit
and should be investigated.
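The counters can be collected along these lines (the interface name is illustrative): total output drops appear in show interfaces, and the per-class "packets", "pkts matched", and drop counters appear in show policy-map interface.

```
Router# show interfaces serial 4/0
Router# show policy-map interface serial 4/0
```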
It is always safest to allocate slightly more than the known required amount of bandwidth to the priority
queue. For example, suppose you allocated 24 kbps, the standard amount required for voice transmission, to
the priority queue. This allocation seems safe because voice packets are transmitted at a constant bit rate.
However, because the network and the router or switch can introduce jitter and delay, allocating slightly more
than the required amount of bandwidth (such as 25 kbps) ensures constancy and availability.
The bandwidth allocated for a priority queue always includes the Layer 2 encapsulation header, but it does not
include the cyclic redundancy check (CRC). (Refer to What Bytes Are Counted by IP to ATM CoS
Queueing? for more information.) Although it is only a few bytes, the CRC imposes a growing impact as
traffic flows include a higher proportion of small packets.
In addition, on ATM interfaces, the bandwidth allocated for a priority queue does not include the following
ATM cell tax overhead:
• Any padding added by the segmentation and reassembly (SAR) function to make the last cell of a packet
an even multiple of 48 bytes.
• The 4-byte CRC of the ATM Adaptation Layer 5 (AAL5) trailer.
• The 5-byte ATM cell header.
When you calculate the amount of bandwidth to allocate for a given priority class, you must account for the
fact that Layer 2 headers are included. When ATM is used, you must account for the fact that ATM cell tax
overhead is not included. You must also allow bandwidth for the possibility of jitter introduced by network
devices in the voice path. Refer to the Low Latency Queueing Feature Overview.
When using priority queueing to carry VoIP packets, refer to Voice over IP − Per Call Bandwidth
Consumption.
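As an illustration of both points (a sketch using commonly cited sizes, not figures from this document): a G.729 call places 20 bytes of voice payload in each packet, adds 40 bytes of IP/UDP/RTP headers, and sends 50 packets per second. The Layer 2 header counts against the priority bandwidth, while on ATM the cell tax does not:

```python
import math

def priority_bw_bps(payload_bytes, l2_header_bytes, pps):
    """Bandwidth counted against the priority queue: IP packet plus the L2 header."""
    ip_packet = payload_bytes + 40            # 40 = IP (20) + UDP (8) + RTP (12)
    return (ip_packet + l2_header_bytes) * pps * 8

def atm_wire_bps(payload_bytes, pps):
    """Actual ATM line rate: AAL5 trailer (8 bytes), pad to a 48-byte multiple, 5-byte header per 53-byte cell."""
    ip_packet = payload_bytes + 40
    cells = math.ceil((ip_packet + 8) / 48)   # SAR pads the last cell to an even multiple of 48 bytes
    return cells * 53 * pps * 8

# G.729 at 50 pps over Frame Relay (assuming a 6-byte L2 header):
print(priority_bw_bps(20, 6, 50))   # 26400 bps to provision for the priority queue
# The same call on ATM consumes more on the wire than the priority bandwidth accounts for:
print(atm_wire_bps(20, 50))         # 42400 bps actually consumed on the wire
```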
The LLQ policer meters each priority class with a token bucket:
• The bucket is filled with tokens based on the class rate, up to a maximum of the burst parameter.
• If the number of tokens is greater than or equal to the packet size, the packet is sent and the token bucket
is decremented. Otherwise, the packet is dropped.
The default burst value of LLQ's token bucket traffic meter is computed as 200 milliseconds of traffic at the
configured bandwidth rate. In some cases, the default value is inadequate, particularly when TCP traffic is
going into the priority queue. TCP flows are typically bursty and may require a burst size greater than the
default assigned by the queueing system, particularly on slow links.
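The 200 ms default works out as simple arithmetic (a quick sketch; the 1600-byte figure below corresponds to a 64 kbps priority rate):

```python
def default_burst_bytes(rate_bps, interval_s=0.2):
    """Default LLQ burst: 200 milliseconds of traffic at the configured priority rate."""
    return int(rate_bps * interval_s / 8)

print(default_burst_bytes(64_000))    # 1600 bytes
print(default_burst_bytes(128_000))   # 3200 bytes
```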
On an ATM PVC with a sustained cell rate of 128 kbps, the queueing system adjusts the burst size as the
value specified with the priority command changes. Use the burst parameter with the priority command to
increase the burst value, in this example from 1600 bytes to 3200 bytes:
policy-map AV
 class AV
  priority percent 50 3200
Note: A high value increases the effective bandwidth that the priority class may use and may give the
appearance that the priority classes are getting more than their fair share of the bandwidth.
In addition, the queueing system originally assigned an internal queue limit of 64 packets to the low-latency
queue. In some cases, when a burst of 64 packets arrived at the priority queue, the traffic meter would
determine that the burst conformed to the configured rate, but the number of packets exceeded the queue limit.
As a result, some packets were tail-dropped. Cisco bug ID CSCdr51979 (registered customers only) resolves
this problem by allowing the priority queue size to grow as deep as the traffic meter allows.
The following output was captured on a Frame Relay PVC configured with a CIR of 56 kbps. In the first set
of sample output, the combined offered rate of the c1 and c2 classes is 136 kbps, far more than the PVC can
carry. Offered rate minus drop rate works out to 52 kbps for c1 and 24 kbps for c2, a combined 76 kbps that
still exceeds the CIR. The reason is that offered rate minus drop rate does not represent the actual
transmission rate, because it does not account for packets sitting in the shaper before they are transmitted.
Service-policy output: p

  Class-map: c1 (match-all)
    7311 packets, 657990 bytes
    30 second offered rate 68000 bps, drop rate 16000 bps
    Match: ip precedence 1
    Weighted Fair Queueing
      Strict Priority
      Output Queue: Conversation 24
      Bandwidth 90 (%)
      Bandwidth 50 (kbps) Burst 1250 (Bytes)
      (pkts matched/bytes matched) 7311/657990
      (total drops/bytes drops) 2221/199890

  Class-map: c2 (match-all)
    7311 packets, 657990 bytes
    30 second offered rate 68000 bps, drop rate 44000 bps
    Match: ip precedence 2
    Weighted Fair Queueing
      Output Queue: Conversation 25
      Bandwidth 10 (%)
      Bandwidth 5 (kbps) Max Threshold 64 (packets)
      (pkts matched/bytes matched) 7310/657900
      (depth/total drops/no-buffer drops) 64/6650/0
In this second set of output, the show policy-map interface counters have normalized. On the 56 kbps PVC,
class c1 is sending about 50 kbps and class c2 is sending about 6 kbps.
Service-policy output: p

  Class-map: c1 (match-all)
    15961 packets, 1436490 bytes
    30 second offered rate 72000 bps, drop rate 21000 bps
    Match: ip precedence 1
    Weighted Fair Queueing
      Strict Priority
      Output Queue: Conversation 24
      Bandwidth 90 (%)
      Bandwidth 50 (kbps) Burst 1250 (Bytes)
      (pkts matched/bytes matched) 15961/1436490
      (total drops/bytes drops) 4864/437760

  Class-map: c2 (match-all)
    15961 packets, 1436490 bytes
    30 second offered rate 72000 bps, drop rate 66000 bps
    Match: ip precedence 2
    Weighted Fair Queueing
      Output Queue: Conversation 25
      Bandwidth 10 (%)
      Bandwidth 5 (kbps) Max Threshold 64 (packets)
      (pkts matched/bytes matched) 15960/1436400
      (depth/total drops/no-buffer drops) 64/14591/0
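The arithmetic behind "normalized" can be checked directly from the second set of counters shown above:

```python
def tx_rate_bps(offered_bps, drop_bps):
    """Approximate transmission rate once the counters stabilize: offered minus dropped."""
    return offered_bps - drop_bps

c1 = tx_rate_bps(72_000, 21_000)   # 51000 bps, about 50 kbps
c2 = tx_rate_bps(72_000, 66_000)   # 6000 bps, about 6 kbps
print(c1 + c2)                     # 57000 bps, roughly the 56 kbps CIR of the PVC
```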
Caution: Before you use debug commands, refer to Important Information on Debug Commands. The
debug priority command can print a large amount of disruptive debug output on a production router; the
amount depends on the level of congestion.
In debug priority output, the reported queue depth value (64 in the captured example) indicates the actual
priority queue depth at the time the packet was dropped.
• Increasing the weighted random early detection (WRED) maximum threshold values on another class
can deplete the available buffers and lead to drops in the priority queue. To help diagnose this problem, a
"no-buffer drops" counter for the priority class is planned for a future release of IOS.
• If the input interface's queue limit is smaller than the output interface's queue limit, packet drops shift
to the input interface. These symptoms are documented in Cisco bug ID CSCdu89226 (registered
customers only). Resolve this problem by sizing the input queue and the output queue appropriately
to prevent input drops and allow the outbound priority queueing mechanism to take effect.
• Enabling a feature that is not supported in the CEF-switched or fast-switched path causes a large
number of packets to be process switched. With LLQ, process-switched packets are currently policed
whether or not the interface is congested. In other words, even if the interface is not
congested, the queueing system meters process-switched packets and ensures that the offered load does
not exceed the bandwidth value configured with the priority command. This problem is documented
in Cisco bug ID CSCdv86818 (registered customers only).
Related Information
• QoS Support Page
• Cisco IOS Software Release 12.2 Traffic Policing Feature Module
• Technical Support - Cisco Systems