The following example shows how to use the match precedence command to manage IPv6 traffic flows:
Router# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)# class-m c1
Router(config-cmap)# match precedence 5
Router(config-cmap)# exit
Router(config)# policy p1
Router(config-pmap)# class c1
Router(config-pmap-c)# police 10000 conform set-prec-trans 4
To verify that packet marking is working as expected, use the show policy command. The output of this command shows a difference in the number of total packets versus the number of packets marked.
Router# show policy p1
Policy Map p1
Class c1
police 10000 1500 1500 conform-action set-prec-transmit 4 exceed-action drop
Router# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)# interface serial 4/1
Router(config-if)# service out p1
Router(config-if)# end
Router# show policy interface s4/1
Serial4/1
Service-policy output: p1
Class-map: c1 (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: precedence 5
police:
10000 bps, 1500 limit, 1500 extended limit
conformed 0 packets, 0 bytes; action: set-prec-transmit 4
exceeded 0 packets, 0 bytes; action: drop
conformed 0 bps, exceed 0 bps violate 0 bps
Class-map: class-default (match-any)
10 packets, 1486 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
During periods of transmit congestion at the outgoing interface, packets arrive faster than the interface can send them.
It is helpful to know how to interpret the output of the show policy-map interface command, which is useful for monitoring the results of a service policy created with Cisco's MQC.
Congestion typically occurs when a fast ingress interface feeds a relatively slow egress interface. Functionally, congestion
is defined as filling the transmit ring on the interface (a ring is a special buffer control structure). Every interface supports
a pair of rings: a receive ring for receiving packets and a transmit ring for sending packets. The size of the rings varies
with the interface controller and with the bandwidth of the interface or virtual circuit (VC). As in the following example,
use the show atm vc vcd command to display the value of the transmit ring on a PA-A3 ATM port adapter.
Router# show atm vc 3
ATM5/0.2: VCD: 3, VPI: 2, VCI: 2
VBR-NRT, PeakRate: 30000, Average Rate: 20000, Burst Cells: 94
AAL5-LLC/SNAP, etype:0x0, Flags: 0x20, VCmode: 0x0
OAM frequency: 0 second(s)
PA TxRingLimit: 10
InARP frequency: 15 minutes(s)
Transmit priority 2
InPkts: 0, OutPkts: 0, InBytes: 0, OutBytes: 0
InPRoc: 0, OutPRoc: 0
InFast: 0, OutFast: 0, InAS: 0, OutAS: 0
InPktDrops: 0, OutPktDrops: 0
CrcErrors: 0, SarTimeOuts: 0, OverSizedSDUs: 0
OAM cells received: 0
OAM cells sent: 0
Status: UP
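The PA TxRingLimit value shown in this output can be tuned on some platforms. The following is a minimal sketch, assuming a PA-A3 port adapter on a platform and release that support the tx-ring-limit VC configuration command (the unit is packets or particles, depending on the hardware):
Router# configure terminal
Router(config)# interface atm 5/0.2
Router(config-subif)# pvc 2/2
Router(config-if-atm-vc)# tx-ring-limit 10
Router(config-if-atm-vc)# end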
Cisco software (also referred to as the Layer 3 processor) and the interface driver use the transmit ring when moving packets
to the physical media. The two processors collaborate in the following way:
- The interface sends packets according to the interface rate or a shaped rate.
- The interface maintains a hardware queue or transmit ring, where it stores the packets waiting for transmission onto the physical wire.
- When the hardware queue or transmit ring fills, the interface provides explicit back pressure to the Layer 3 processor system. It notifies the Layer 3 processor to stop dequeuing packets to the interface's transmit ring because the transmit ring is full. The Layer 3 processor now stores the excess packets in the Layer 3 queues.
- When the interface sends the packets on the transmit ring and empties the ring, it once again has sufficient buffers available to store the packets. It releases the back pressure, and the Layer 3 processor dequeues new packets to the interface.
The most important aspect of this communication system is that the interface recognizes that its transmit ring is full and
throttles the receipt of new packets from the Layer 3 processor system. Thus, when the interface is congested, the drop decision
is moved from a random, last-in, first-dropped decision in the first in, first out (FIFO) queue of the transmit ring to a
differentiated decision based on IP-level service policies implemented by the Layer 3 processor.
Service policies apply only to packets stored in the Layer 3 queues. The table below illustrates which packets sit in the
Layer 3 queue. Locally generated packets are always process switched and are delivered first to the Layer 3 queue before being
passed on to the interface driver. Fast-switched and Cisco Express Forwarding-switched packets are delivered directly to the
transmit ring and sit in the L3 queue only when the transmit ring is full.
Table 1. Packet Types and the Layer 3 Queue
Packet Type | Congestion | Noncongestion
Locally generated packets, including Telnet packets and pings | Yes | Yes
Other packets that are process switched | Yes | Yes
Packets that are Cisco Express Forwarding- or fast-switched | Yes | No
The following example shows these guidelines applied to the show policy-map interface command output.
Router# show policy-map interface atm 1/0.1
ATM1/0.1: VC 0/100 -
Service-policy output: cbwfq (1283)
Class-map: A (match-all) (1285/2)
28621 packets, 7098008 bytes
5 minute offered rate 10000 bps, drop rate 0 bps
Match: access-group 101 (1289)
Weighted Fair Queueing
Output Queue: Conversation 73
Bandwidth 500 (kbps) Max Threshold 64 (packets)
(pkts matched/bytes matched) 28621/7098008
(depth/total drops/no-buffer drops) 0/0/0
Class-map: B (match-all) (1301/4)
2058 packets, 148176 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: access-group 103 (1305)
Weighted Fair Queueing
Output Queue: Conversation 75
Bandwidth 50 (kbps) Max Threshold 64 (packets)
(pkts matched/bytes matched) 0/0
(depth/total drops/no-buffer drops) 0/0/0
Class-map: class-default (match-any) (1309/0)
19 packets, 968 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any (1313)
The table below defines counters that appear in the example.
Table 2. Packet Counters from show policy-map interface Output
Counter | Explanation
28621 packets, 7098008 bytes | The number of packets matching the criteria of the class. This counter increments whether or not the interface is congested.
(pkts matched/bytes matched) 28621/7098008 | The number of packets matching the criteria of the class when the interface was congested. In other words, the interface's transmit ring was full, and the driver and the Layer 3 processor system worked together to queue the excess packets in the Layer 3 queues, where the service policy applies. Packets that are process switched always go through the Layer 3 queuing system and therefore increment the "packets matched" counter.
Class-map: B (match-all) (1301/4) | These numbers define an internal ID used with the CISCO-CLASS-BASED-QOS-MIB Management Information Base (MIB).
5 minute offered rate 0 bps, drop rate 0 bps | Use the load-interval command to change this value and make it a more instantaneous value. The lowest value is 30 seconds; however, statistics displayed in the show policy-map interface command output are updated every 10 seconds. Because the command effectively provides a snapshot at a specific moment, the statistics may not reflect a temporary change in queue size.
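As a point of reference, the load-interval value is changed on the interface where the service policy is attached. The following is a minimal sketch using the serial interface from the earlier example; 30 seconds is the lowest supported value:
Router# configure terminal
Router(config)# interface serial 4/1
Router(config-if)# load-interval 30
Router(config-if)# end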
Without congestion, there is no need to queue any excess packets. When congestion occurs, packets, including Cisco Express
Forwarding- and fast-switched packets, might go into the Layer 3 queue. If you use congestion management features, packets
accumulating at an interface are queued until the interface is free to send them; they are then scheduled according to their
assigned priority and the queueing mechanism configured for the interface.
Normally, the packets counter is much larger than the packets matched counter. If the values of the two counters are nearly
equal, then the interface is receiving a large number of process-switched packets or is heavily congested. Both of these conditions
should be investigated to ensure optimal packet forwarding.
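One way to investigate the process-switching case is to compare per-switching-path packet counts for the interface. The following is a sketch, assuming a platform and release that support the show interfaces stats command:
Router# show interfaces serial 4/1 stats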
Routers allocate conversation numbers for the queues that are created when the service policy is applied. The following example
shows the queues and related information.
Router# show policy-map interface s1/0.1 dlci 100
Serial1/0.1: DLCI 100 -
output : mypolicy
Class voice
Weighted Fair Queueing
Strict Priority
Output Queue: Conversation 72
Bandwidth 16 (kbps) Packets Matched 0
(pkts discards/bytes discards) 0/0
Class immediate-data
Weighted Fair Queueing
Output Queue: Conversation 73
Bandwidth 60 (%) Packets Matched 0
(pkts discards/bytes discards/tail drops) 0/0/0
mean queue depth: 0
drops: class random tail min-th max-th mark-prob
0 0 0 64 128 1/10
1 0 0 71 128 1/10
2 0 0 78 128 1/10
3 0 0 85 128 1/10
4 0 0 92 128 1/10
5 0 0 99 128 1/10
6 0 0 106 128 1/10
7 0 0 113 128 1/10
rsvp 0 0 120 128 1/10
Class priority-data
Weighted Fair Queueing
Output Queue: Conversation 74
Bandwidth 40 (%) Packets Matched 0 Max Threshold 64 (packets)
(pkts discards/bytes discards/tail drops) 0/0/0
Class class-default
Weighted Fair Queueing
Flow Based Fair Queueing
Maximum Number of Hashed Queues 64 Max Threshold 20 (packets)
Information reported for each class includes the following:
- Class definition
- Queueing method applied
- Output Queue Conversation number
- Bandwidth used
- Number of packets discarded
- Number of bytes discarded
- Number of packets dropped
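As a point of reference, output like the mypolicy example above could come from a policy map along the following lines. This is a sketch only: the class maps voice, immediate-data, and priority-data are assumed to exist already, and the priority, bandwidth, and WRED settings are inferred from the display rather than taken from an actual configuration.
Router(config)# policy-map mypolicy
Router(config-pmap)# class voice
Router(config-pmap-c)# priority 16
Router(config-pmap-c)# exit
Router(config-pmap)# class immediate-data
Router(config-pmap-c)# bandwidth percent 60
Router(config-pmap-c)# random-detect
Router(config-pmap-c)# exit
Router(config-pmap)# class priority-data
Router(config-pmap-c)# bandwidth percent 40
Router(config-pmap-c)# exit
Router(config-pmap)# class class-default
Router(config-pmap-c)# fair-queue
Router(config-pmap-c)# end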
The class-default class is the default class to which traffic is directed if that traffic does not satisfy the match criteria of the other classes defined in the policy map. The fair-queue command allows you to specify the number of dynamic queues into which IP flows are sorted and classified. Alternatively, routers allocate a default number of queues derived from the bandwidth of the interface or VC. Supported values in either case are a power of two, in the range from 16 to 4096.
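For example, the following sketch sets 64 dynamic queues for class-default in a policy map with the illustrative name wfq-example:
Router(config)# policy-map wfq-example
Router(config-pmap)# class class-default
Router(config-pmap-c)# fair-queue 64
Router(config-pmap-c)# end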
The table below lists the default number of dynamic queues as a function of interface bandwidth; the table that follows it lists the defaults for ATM permanent virtual circuits (PVCs).
Table 3. Default Number of Dynamic Queues as a Function of Interface Bandwidth
Bandwidth Range | Number of Dynamic Queues
Less than or equal to 64 kbps | 16
More than 64 kbps and less than or equal to 128 kbps | 32
More than 128 kbps and less than or equal to 256 kbps | 64
More than 256 kbps and less than or equal to 512 kbps | 128
More than 512 kbps | 256
The table below lists the default number of dynamic queues in relation to ATM PVC bandwidth.
Table 4. Default Number of Dynamic Queues as a Function of ATM PVC Bandwidth
Bandwidth Range | Number of Dynamic Queues
Less than or equal to 128 kbps | 16
More than 128 kbps and less than or equal to 512 kbps | 32
More than 512 kbps and less than or equal to 2000 kbps | 64
More than 2000 kbps and less than or equal to 8000 kbps | 128
More than 8000 kbps | 256
Based on the number of reserved queues for WFQ, Cisco software assigns a conversation or queue number as shown in the table
below.
Table 5. Conversation Numbers Assigned to Queues
Number | Type of Traffic
1 to 256 | General flow-based traffic queues. Traffic that does not match a user-created class matches class-default and one of the flow-based queues.
257 to 263 | Reserved for Cisco Discovery Protocol and for packets marked with an internal high-priority flag.
264 | Reserved queue for the priority class (classes configured with the priority command). Look for the "Strict Priority" value for the class in the show policy-map interface output. The priority queue uses a conversation ID equal to the number of dynamic queues, plus 8.
265 and higher | Queues for user-created classes.