Cisco IOS XE Scaling Limits for MLP Bundles
This section lists the scaling limits for MLP bundles across the Cisco IOS XE releases in which those limits were introduced or enhanced.
Release 2.2.0S
In Cisco IOS XE Release 2.2.0S, the MLP feature was introduced on the Cisco ASR 1000 Series Aggregation Services Routers. MLPoSerial was the first supported transport. In this release, MLP bundles can consist of up to 10 serial links. The member links in a bundle do not have to be of the same bandwidth. The Cisco ASR 1000 Series Aggregation Services Routers support links of types T1, E1, and NxDS0. MLP LFI is fully supported with MLPoSerial in this release.
Release 3.4.0S
In Cisco IOS XE Release 3.4.0S, the MLP feature was enhanced to enable the Cisco ASR 1000 Series Aggregation Services Routers to act as LAC, LNS, or PTA devices, and support was added for tunneling bundles between the LAC device and the LNS device. In this release, the transport between the LAC device and the LNS device is Layer 2 Tunnel Protocol (L2TP). The L2TP tunnels can operate on either 1-Gbps or 10-Gbps interfaces. When a Cisco ASR 1000 Series Aggregation Services Router acts as an LNS device, it terminates the MLP bundles coming through the L2TP tunnel from the LAC. In this release, support was added for MLP upstream fragment reassembly, but not for MLP downstream fragmentation.
Release 3.7.1S
In Cisco IOS XE Release 3.7.1S, the existing support for the MLP feature in a broadband topology was enhanced. The scaling limits were increased for the Ethernet transports, and downstream fragmentation support was added for the broadband topologies.
In this release, when a Cisco ASR 1000 Series Aggregation Services Router acts as an LNS device, it terminates the MLP bundles coming through the L2TP tunnel from the LAC. The scaling targets mentioned for MLP over broadband are based on RP2/ESP40 and 2RU-VE hardware configurations. The scaling capabilities are lower for RP1 and for the ESP5, ESP10, and ESP20.
The implementation of MLP on a Cisco ASR 1000 Series Aggregation Services Router does not support all the Cisco IOS XE interoperability features.
Release 3.12.0S
In Cisco IOS XE Release 3.12.0S, multi-member-link MLPoA and MLPoEoA, including downstream LFI, were introduced. The scaling limits were increased for the member links in MLPoA and MLPoEoA scenarios.
Table 1 shows the maximum scale numbers for various MLP functionalities on the Cisco ASR 1000 Series Aggregation Services Routers.
Transport | Maximum Number of Members per Bundle | Maximum Number of Bundles per System | Maximum Number of Member Links per System | Downstream LFI | Upstream Fragment Reassembly | Cisco IOS XE Release
---|---|---|---|---|---|---
MLPoSerial | 10 | 1232 | 1232 | Yes | Yes | 2.2.0S
MLPoA AAL5MUX | 1 | 1000 | 1000 | No | Yes | 3.4.0S
MLPoA AAL5MUX | 8 | 4000 | 4000 | Yes | Yes | 3.12.0S
MLPoA AAL5SNAP | 1 | 1000 | 1000 | No | Yes | 3.4.0S
MLPoA AAL5SNAP | 8 | 4000 | 4000 | Yes | Yes | 3.12.0S
MLPoE | 1 | 4000 | 4000 | No | Yes | 3.4.0S
MLPoE | 8 | 4000 | 4000 | Yes | Yes | 3.7.1S
MLPoEoA AAL5SNAP | 1 | 1000 | 1000 | No | Yes | 3.4.0S
MLPoEoA AAL5SNAP | 8 | 4000 | 4000 | Yes | Yes | 3.12.0S
MLPoEoQinQ | 1 | 4000 | 4000 | No | Yes | 3.4.0S
MLPoEoQinQ | 8 | 4000 | 4000 | Yes | Yes | 3.7.1S
MLPoEoVLAN | 1 | 4000 | 4000 | No | Yes | 3.4.0S
MLPoEoVLAN | 8 | 4000 | 4000 | Yes | Yes | 3.7.1S
MLPoLNS | 1 | 4000 | 4000 | No | Yes | 3.4.0S
MLPoLNS | 8 | 4000 | 4000 | Yes | Yes | 3.7.1S
Restrictions for MLP over Serial Interfaces
The following restrictions apply to MLP over Serial Interfaces:
- The MLP over Serial Interfaces feature supports a maximum of ten member links per bundle. The member links can be any combination of T1/E1 or fractional T1/E1 (for example, NxDS0) interfaces. Only member-link interfaces at or below E1 speeds (DS0, T1, and E1) are supported by the MLP over Serial Interfaces feature. For better MLP performance, all the member links in a bundle should be of the same bandwidth.
- Member links in a bundle cannot be of different encapsulation types.
- You cannot manually configure the bandwidth of an MLP bundle by using the bandwidth command on the multilink interface. The bandwidth of an MLP bundle is managed based on the aggregate bandwidth of all the active member links in the bundle. As links are dynamically added to or removed from an MLP bundle, the bandwidth is updated to reflect the aggregate of the active links. The bandwidth can be rate limited by applying a hierarchical QoS policy on the multilink interface, with a shaper applied to the parent class-default class (see the configuration sketch after this list).
- MLP over Frame Relay is not supported; only MLP over serial PPP links is supported. Customers who require multilink support in a Frame Relay environment can use the Multilink Frame Relay (MLFR-FRF.16) feature.
- The legacy IOS compression feature compress [mppc | stac | predictor] is not supported.
- LFI is supported on MLP bundles with any number of links in the bundle. However, when a bundle has more than one member link, the order of the priority (PPP-encapsulated) packets is not guaranteed. Priority-packet distribution is handled in a manner similar to IP per-packet load sharing. MLP guarantees the ordering of nonpriority packets, which are reordered at the peer device based on the MLP packet sequence number.
- Ordering issues with priority traffic over LFI bundles that have multiple member links can be addressed on some platforms by using the Multiclass Multilink Protocol (MCMP, RFC 2686), an extension of MLP. The Cisco ASR 1000 Series Aggregation Services Routers do not support MCMP.
- Only the MLP long-sequence number format is supported for the packet header format option.
- PPPoE is not supported on SVI interfaces for the Cisco 1000 Series Integrated Services Routers and Cisco 4000 Series Integrated Services Routers.
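The following minimal sketch illustrates the bundle-bandwidth restriction above: the effective rate of a two-link serial MLP bundle is limited by a hierarchical QoS policy with a shaper on the parent class-default class, not by a bandwidth command. The interface names, addressing, and the 3-Mbps shape rate are assumptions for illustration only.

```
! Hypothetical hierarchical policy: the parent shaper on class-default
! rate limits the bundle; queuing behavior goes in the child policy.
policy-map MLP-CHILD
 class class-default
  fair-queue
policy-map MLP-PARENT
 class class-default
  shape average 3000000
  service-policy MLP-CHILD
!
! Bundle interface: note that no bandwidth command is configured here,
! because the bundle bandwidth is derived from the active member links.
interface Multilink1
 ip address 192.0.2.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
 service-policy output MLP-PARENT
!
! Two T1-speed serial member links joined to bundle 1.
interface Serial0/1/0:0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial0/1/1:0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```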
Restrictions for MLP over Ethernet at PTA and LAC
The following restrictions apply to MLP over Ethernet at PTA and LAC:
- MLPoE using EtherChannel is not supported.
- For MLP virtual access bundles, the default Layer 3 (that is, IP and IPv6) maximum transmission unit (MTU) value is 1500. For more information about MTU, see the MTU section.
- For MLPoE PTA variations (MLPoE, MLPoVLAN, and MLPoQinQ), the default bandwidth of the member-link session is 1 Gbps instead of the data rate communicated by the DSLAM to the PTA router. If a bandwidth statement is added to the virtual template, the bandwidth is applied to the bundle instead of the member link, which is not the desired behavior. To define the data rate of an MLPoE PTA-type bundle, apply a QoS policy on the bundle session that includes a parent shaper on the class-default class with an explicit data rate defined. Do not use the shape percent command in this parent shaper, because the shape percent command uses the default data rate of 1 Gbps as the base rate for the percent calculation. However, percent-based rates can be defined in the child (nested) policy if a hierarchical policy is being defined (see the configuration sketch after this list).
- If the DSLAM between the CPE and PTA communicates the link rate through the PPPoE dsl-sync-rate tags (Actual Data-Rate Downstream [0x82/130d] tag), the PTA device passes this data to the RADIUS server, but the Cisco ASR 1000 Series Aggregation Services Routers do not act upon it. The data rate of the session remains as described in the previous list item.
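The following hypothetical policy sketches the approach described above for defining the data rate of an MLPoE PTA-type bundle: an explicit shaper on the parent class-default class, with percent-based rates confined to the nested child policy. The class names, the 8-Mbps rate, and the virtual template number are assumptions for illustration.

```
class-map match-any VOICE
 match dscp ef
!
policy-map PTA-CHILD
 class VOICE
  priority percent 20
 class class-default
  fair-queue
policy-map PTA-PARENT
 class class-default
  ! Explicit rate; do not use "shape percent" in this parent shaper.
  shape average 8000000
  service-policy PTA-CHILD
!
! Applied to the virtual template from which the bundle session is cloned.
interface Virtual-Template1
 ip unnumbered Loopback0
 ppp multilink
 service-policy output PTA-PARENT
```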
Restrictions for MLP over ATM at PTA and LAC
The following restrictions apply to MLP over ATM at PTA and LAC:
- ATM Autosense is supported to allow the dynamic selection of MLPoA or MLPoEoA.
- For ATM, the link-level bandwidth is a part of the ATM permanent virtual circuit (PVC) configuration, based on the unspecified bit rate (UBR) or variable bit rate (VBR) configuration. The bundle bandwidth is the aggregate of the member-link session bandwidth (a PVC configuration sketch follows the note below).
Note: The MLP over Ethernet over ATM at PTA and LAC has the same restrictions as the MLP over ATM at PTA and LAC.
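The following sketch shows how the member-link bandwidth for MLPoA is derived from the PVC traffic class rather than from a bandwidth statement, assuming an AAL5MUX PPP PVC shaped to VBR-nrt at 768 kbps. The VPI/VCI, rates, and virtual template number are illustrative assumptions.

```
interface ATM0/2/0.1 point-to-point
 pvc 1/42
  ! VBR-nrt (or ubr) defines the link-level bandwidth for this member link.
  vbr-nrt 768 768
  encapsulation aal5mux ppp Virtual-Template1
!
interface Virtual-Template1
 ip unnumbered Loopback0
 ppp multilink
```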
Restrictions for MLP at LAC
In the case of MLP over LNS (Ethernet) LAC switching, the MLP member-link session and the packet payload are transparent to the LAC device because it does not terminate the MLP session or the bundle interface. Hence, the LAC device does not limit the number of member-link sessions associated with a bundle. Similarly, the LFI functionality is transparent at the LAC device because the traffic is simply switched or passed through.
Restrictions for MLP over LNS
The following restrictions apply to MLP over LNS:
- MLPoLNS bundles are supported with only Ethernet as the trunk between the LAC and LNS.
- Layer 2 Tunnel Protocol (L2TP) over IPsec is not supported.
- QoS (other than downstream Model-F shaping) on interfaces and tunnels towards the customer premise equipment (CPE) is not supported.
- When the CPE client initiates the PPP LCP connection, the multilink negotiation included as part of the LCP negotiation may fail if the LAC has not yet established a connection with the LNS (which is typically the case). The LNS renegotiates the multilink LCP options with the CPE client when the LAC initiates the connection to the LNS. To allow this renegotiation of LCP options, the lcp renegotiation always command must be configured in the VPDN group at the LNS (as shown in the sketch after this list).
- Although per-packet load balancing is not supported, the configuration is not blocked and the functionality is operational (but not tested). Per-packet load balancing cannot be used with MLPoLNS because MLPoLNS requires a single path per destination IP address.
- Unlike the MLP over Serial mode or the MLP PTA mode, packets may traverse several network hops between the CPE and LNS devices in an MLPoLNS network. As a result of this multihop topology, even on a single-link bundle, MLP-encapsulated packets may arrive at the receiver out of order. Hence, the MLPoLNS receiver operates in a loose, lost-fragment detection mode. In this mode, if an MLP fragment is lost, the MLP receiver waits for a short time to receive the lost fragment. In addition, the MLP receiver limits the amount of out-of-order MLP data received before the fragment is declared lost. In Cisco IOS XE software, the default timeout value is 1 second. This may create problems in an environment with high packet loss and scaled MLP configurations because it requires the receiver to potentially buffer large amounts of data for each MLP bundle. Since the available buffer space is a finite resource, worst-case depletion of buffers can bleed over and begin affecting packet buffering on other MLP bundles. The MLP lost-fragment timeout can be configured on the multilink virtual template interface by using the ppp timeout multilink lost-fragment seconds milliseconds configuration command.
By default, in MLPoLNS, the Cisco IOS XE software informs the MLP that packets may arrive out of order. This works well for upstream traffic, but does not address the ordering issue at the peer CPE device. The peer CPE device should also be configured to accept out-of-order packets. In Cisco devices, this can be managed by configuring the ppp link reorders command on the bundle interface (see the configuration sketch after this list).
- When the Cisco ASR 1000 Series Aggregation Services Routers function as both a PTA device and an LNS device simultaneously, locally terminated member links (PTA) and member links that are forwarded from the LAC are not supported within the same bundle.
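The following LNS-side sketch brings together the commands referenced in this list: lcp renegotiation always under the VPDN group, and a lost-fragment timeout on the multilink virtual template. The group number, peer hostname, timeout values, and template number are assumptions; the ppp link reorders command noted in the comment belongs on the CPE bundle interface, not on the LNS.

```
vpdn enable
!
vpdn-group 1
 accept-dialin
  protocol l2tp
  virtual-template 1
 terminate-from hostname LAC1
 ! Allow the LNS to renegotiate multilink LCP options with the CPE client.
 lcp renegotiation always
!
interface Virtual-Template1
 ip unnumbered Loopback0
 ppp multilink
 ! Shorten the wait for lost fragments (0 seconds, 500 milliseconds).
 ppp timeout multilink lost-fragment 0 500
!
! On the Cisco CPE, allow receipt of out-of-order packets on the bundle:
!  ppp link reorders
```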
Restrictions for Broadband MLP at PTA and LNS
The following restrictions apply to all variations of broadband MLP at PTA and LNS modes:
- When defining an MLP bundle with multiple member-link sessions, we recommend that all the member-link sessions utilize the same physical interface or subinterface. If other broadband sessions are sharing the same interface, ensure that all the member-link sessions utilize the same physical interface or subinterface.
- The following issues might occur because of splitting links across separate physical interfaces or subinterfaces:
- MLP is a sequenced protocol: all packets and fragments must be reordered and reassembled at the receiver, based on the MLP sequence number, before the receiver forwards them. In such a scenario, packets traversing separate physical interfaces may experience additional latency disparity between links because of transmission delays and other issues associated with using multiple physical paths. The reordering and reassembly processing may require additional MLP buffering at the receiver.
- MLP on the Cisco ASR 1000 Series Aggregation Services Routers performs congestion management of the MLP bundle based on the congestion state of the member-link sessions that make up the bundle. If member links are distributed across multiple interfaces and sufficient congestion is detected in one or more member links, the bundle may be back-pressured due to the congestion even if all the links in the bundle are not congested. Keeping all the links on the same physical interface or subinterface reduces the chance of back pressure due to a single congested link.