- Preface
- Cisco ONS Documentation Roadmap for Release 9.2.1
- Chapter 1, CE-Series Ethernet Cards
- Chapter 2, E-Series and G-Series Ethernet Cards
- Chapter 3, ML-Series Cards Overview
- Chapter 4, CTC Operations
- Chapter 5, Initial Configuration
- Chapter 6, Configuring Interfaces
- Chapter 7, Configuring CDP
- Chapter 8, Configuring POS
- Chapter 9, Configuring Bridges
- Chapter 10, Configuring IEEE 802.1Q Tunneling and Layer 2 Protocol Tunneling
- Chapter 11, Configuring STP and RSTP
- Chapter 12, Configuring Link Aggregation
- Chapter 13, Configuring Security for the ML-Series Card
- Chapter 14, Configuring RMON
- Chapter 15, Configuring SNMP
- Chapter 16, Configuring VLAN
- Chapter 17, Configuring Networking Protocols
- Chapter 18, Configuring IRB
- Chapter 19, Configuring IEEE 802.17b Resilient Packet Ring
- Chapter 20, Configuring VRF Lite
- Chapter 21, Configuring Quality of Service
- Chapter 22, Configuring Ethernet over MPLS
- Chapter 23, Configuring the Switching Database Manager
- Chapter 24, Configuring Access Control Lists
- Chapter 25, Configuring Cisco Proprietary Resilient Packet Ring
- Chapter 26, ML-MR-10 Card Overview
- Chapter 27, IP Host Functionality on the ML-MR-10 Card
- Chapter 29, Configuring Security for the ML-MR-10 Card
- Chapter 30, Configuring IEEE 802.17b Resilient Packet Ring on the ML-MR-10 Card
- Chapter 31, Configuring POS on the ML-MR-10 Card
- Chapter 32, Configuring Card Port Protection on the ML-MR-10 Card
- Chapter 33, Configuring Ethernet Virtual Circuits and QoS on the ML-MR-10 Card
- Chapter 34, Configuring Link Aggregation on the ML-MR-10 Card
- Chapter 35, Configuring Ethernet OAM (IEEE 802.3ah), CFM (IEEE 802.1ag), and E-LMI on the ML-MR-10 Card
- Appendix A, CPU and Memory Utilization on the ML-MR-10 Card
- Appendix A, POS on ONS Ethernet Cards
- Appendix B, Command Reference
- Appendix C, Unsupported CLI Commands
- Appendix D, Using Technical Support
- Understanding Link Aggregation
- Understanding Encapsulation over EtherChannel or POS Channel
- Monitoring and Verifying EtherChannel and POS
- Understanding Link Aggregation Control Protocol
Configuring Link Aggregation
This chapter applies to the ML-Series (ML100T-12, ML100X-8, and ML1000-2) cards and describes how to configure link aggregation for the ML-Series cards, both EtherChannel and packet-over-SONET/SDH (POS) channel. For additional information about the Cisco IOS commands used in this chapter, refer to the Cisco IOS Command Reference publication.
Understanding Link Aggregation
The ML-Series card offers both EtherChannel and POS channel. Traditionally EtherChannel is a trunking technology that groups together multiple full-duplex IEEE 802.3 Ethernet interfaces to provide fault-tolerant high-speed links between switches, routers, and servers. EtherChannel forms a single higher bandwidth routing or bridging endpoint and was designed primarily for host-to-switch connectivity. The ML-Series card extends this link aggregation technology to bridged POS interfaces. POS channel is only supported with LEX encapsulation.
Link aggregation provides the following benefits:
- Logical aggregation of bandwidth
- Load balancing
- Fault tolerance
Port channel is a term for both POS channel and EtherChannel. The port channel interface is treated as a single logical interface even though it consists of multiple interfaces. Each port channel interface consists of one type of interface: Fast Ethernet, Gigabit Ethernet, or POS. You must perform all port channel configuration on the port channel (EtherChannel or POS channel) interface rather than on the individual member Ethernet or POS interfaces. You can create the port channel interface by entering the interface port-channel interface configuration command.
Note You must perform all Cisco IOS configurations—such as bridging, routing, or parameter changes such as an MTU change—on the port channel (EtherChannel or POS channel) interface rather than on individual member Ethernet or POS interfaces.
Port channel connections are fully compatible with IEEE 802.1Q trunking and routing technologies. IEEE 802.1Q trunking can carry multiple VLANs across a port channel.
Each ML100T-12, ML100X-8, or ML1000-2 card supports one POS channel, a port channel made up of the two POS ports. A POS channel combines the two POS port capacities into a maximum aggregate capacity of STS-48c or VC4-16c.
Each ML100T-12 supports up to six FECs and one POS channel. Each ML100X-8 supports up to four FECs and one POS channel. A maximum of four Fast Ethernet ports can bundle into one Fast Ethernet Channel (FEC) and provide bandwidth scalability up to 400-Mbps full-duplex Fast Ethernet.
Each ML1000-2 supports up to two port channels, including the POS channel. A maximum of two Gigabit Ethernet ports can bundle into one Gigabit Ethernet Channel (GEC) and provide 2-Gbps full-duplex aggregate capacity on the ML1000-2.
Each ML-MR-10 card supports up to ten port channel interfaces. A maximum of ten Gigabit Ethernet ports can be added to one port channel.
Note If the number of POS ports configured on the ML-MR-10 card is 26, the ML-MR-10 card supports two port channel interfaces. However, a maximum of ten Gigabit Ethernet ports can be added to one port channel.
Note Link aggregation across multiple ML-Series cards is not supported.
Note Policing is not supported on port channel interfaces.
Note The ML-Series does not support the routing of Subnetwork Access Protocol (SNAP) or Inter-Switch Link (ISL) encapsulated frames.
Configuring EtherChannel
You can configure an FEC or a GEC by creating an EtherChannel interface (port channel) and assigning a network IP address. All interfaces that are members of a FEC or a GEC should have the same link parameters, such as duplex and speed.
To create an EtherChannel interface, perform the following procedure, beginning in global configuration mode:
For information on other configuration tasks for the EtherChannel, refer to the Cisco IOS Configuration Fundamentals Configuration Guide.
To assign Ethernet interfaces to the EtherChannel, perform the following procedure, beginning in global configuration mode:
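Because the procedure tables are not reproduced here, the following sketch shows the usual Cisco IOS command sequence for both steps, creating the port channel and then assigning member ports. It is illustrative only: the IP address, channel number, and interface numbers are assumptions, not values from this guide.

```
! Create the EtherChannel (port channel) interface and assign an IP address
interface Port-channel 1
 ip address 10.1.1.1 255.255.255.0
 no shutdown
!
! Assign Fast Ethernet member interfaces to channel group 1
interface FastEthernet 0
 channel-group 1
 no shutdown
!
interface FastEthernet 1
 channel-group 1
 no shutdown
```

Remember that link parameters such as speed and duplex must match on all member ports, and that protocol-level configuration belongs on the port channel interface, not on the members.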
EtherChannel Configuration Example
Figure 12-1 shows an example of EtherChannel. The associated commands are provided in Example 12-1 (Switch A) and Example 12-2 (Switch B).
Figure 12-1 EtherChannel Example
Configuring POS Channel
You can configure a POS channel by creating a POS channel interface (port channel) and optionally assigning an IP address. All POS interfaces that are members of a POS channel should have the same port properties and be on the same ML-Series card.
Note POS channel is only supported with LEX encapsulation.
To create a POS channel interface, perform the following procedure, beginning in global configuration mode:
To assign POS interfaces to the POS channel, perform the following procedure, beginning in global configuration mode:
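As with EtherChannel, the POS channel is created first and the member POS ports are then assigned to it. A minimal sketch of both steps, assuming a bridged setup with LEX encapsulation; the channel and bridge-group numbers are illustrative:

```
! Enable IEEE spanning tree for the bridge group used below
bridge 10 protocol ieee
!
! Create the POS channel (port channel) interface; IP address is optional
interface Port-channel 1
 no ip address
 bridge-group 10
!
! Assign both POS ports to the POS channel
interface POS 0
 channel-group 1
!
interface POS 1
 channel-group 1
```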
POS Channel Configuration Example
Figure 12-2 shows an example of POS channel configuration. The associated code is provided in Example 12-3 (Switch A) and Example 12-4 (Switch B).
Figure 12-2 POS Channel Example
Understanding Encapsulation over EtherChannel or POS Channel
When configuring encapsulation over FEC, GEC, or POS channel, be sure to configure IEEE 802.1Q on the port-channel interface, not on its member ports. However, certain attributes of the port channel, such as duplex mode, need to be configured at the member port level. Also make sure that you do not apply protocol-level configuration (such as an IP address or a bridge group assignment) to the member interfaces. All protocol-level configuration should be on the port channel or on its subinterface. You must also configure IEEE 802.1Q encapsulation on the partner system of the EtherChannel.
Configuring Encapsulation over EtherChannel or POS Channel
To configure encapsulation over the EtherChannel or POS channel, perform the following procedure, beginning in global configuration mode:
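A sketch of IEEE 802.1Q encapsulation over a port channel, following the rule above that the encapsulation and bridge group go on the port channel subinterface rather than on the member ports. The VLAN ID, bridge-group, and channel numbers are illustrative assumptions:

```
bridge 20 protocol ieee
!
interface Port-channel 1
 no ip address
!
! IEEE 802.1Q and the bridge group are applied on the subinterface
interface Port-channel 1.1
 encapsulation dot1Q 20
 bridge-group 20
```

The partner system at the other end of the channel would carry a matching dot1Q configuration for VLAN 20.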
Encapsulation over EtherChannel Example
Figure 12-3 shows an example of encapsulation over EtherChannel. The associated code is provided in Example 12-5 (Switch A) and Example 12-6 (Switch B).
Figure 12-3 Encapsulation over EtherChannel Example
This encapsulation over EtherChannel example shows how to set up two ONS 15454s with ML100T-12 cards (Switch A and Switch B) to interoperate with two switches that also support IEEE 802.1Q encapsulation over EtherChannel. To set up this example, use the configurations in the following sections for both Switch A and Switch B.
Monitoring and Verifying EtherChannel and POS
After FEC, GEC, or POS is configured, you can monitor its status using the show interfaces port-channel command.
Understanding Link Aggregation Control Protocol
In Software Release 8.0.0 and later, the ML100T-12, ML1000-2, ML100X-8, and CE-100T-8 cards can use the Link Aggregation Control Protocol (LACP) to govern the exchange of LACP packets with a peer and to detect invalid packets. When LACP is not configured on the ML-Series card, the card ports transport the LACP signal transparently (that is, without intervention or termination).
Passive Mode and Active Mode
Passive and active modes are configured per port, and they differ in when a card transmits LACP packets. In passive mode, the LACP entity on the node transmits packets only after it receives reciprocal valid packets from the peer node. In active mode, the node transmits packets regardless of the LACP capability of its peer.
LACP Functions
LACP performs the following functions in the system:
- Maintains configuration information in order to control aggregation
- Exchanges configuration information with other peer devices
- Attaches or detaches ports from the link aggregation group based on the exchanged configuration information
- Enables data flow when both sides of the aggregation group are synchronized
LACP Parameters
LACP utilizes the following parameters to control aggregation:
System Identifier—A unique identification assigned to each system. It is the concatenation of the system priority and a globally administered individual MAC address.
Port Identification—A unique identifier for each physical port in the system. It is the concatenation of the port priority and the port number.
Port Capability Identification—An integer, called a key, that identifies one port’s capability to aggregate with another port. There are two types of key: administrative and operational. An administrative key is configured by the network administrator, and an operational key is assigned by LACP to a port based on its aggregation capability.
Aggregation Identifier—A unique integer that is assigned to each aggregator and is used for identification within the system.
LACP Usage Scenarios
In Software Release 8.0.0 and later, LACP functions on ML-Series cards in termination mode and on the CE-Series cards in transparent mode.
Termination Mode
In termination mode, the link aggregation bundle terminates or originates at the ML-Series card. To operate in this mode, LACP should be configured on the Ethernet interface. One protected SONET or SDH circuit can carry the aggregated Ethernet traffic of the bundle. The advantage of termination mode over transparent mode is that network bandwidth is not wasted. However, the disadvantage is that there is no card protection between the CPE and the UNI (ONS 15454), because all the links in the ML-Series card bundle belong to the same card.
Figure 12-4 LACP Termination Mode Example
Transparent Mode
In transparent mode, LACP packets are transmitted through a card without any processing. In Figure 12-5, the link aggregation bundle originates at router 1 and terminates at router 2. While functioning in this mode, the CE-100T-8 cards pass LACP packets through transparently, so that the two CPE devices perform the link aggregation. To operate in this mode, no LACP configuration is required on the CE-100T-8 cards.
Figure 12-5 LACP Transparent Mode Example
Configuring LACP
To configure LACP over the EtherChannel, perform the following procedure, beginning in global configuration mode:
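The usual Cisco IOS pattern is to set the LACP mode when assigning each member port to the channel group. A sketch with illustrative interface and channel numbers:

```
interface Port-channel 1
 no ip address
!
! "mode active" transmits LACP packets regardless of the peer;
! use "mode passive" to transmit only after receiving valid LACP
! packets from the peer node.
interface GigabitEthernet 0
 channel-group 1 mode active
!
interface GigabitEthernet 1
 channel-group 1 mode active
```

At least one end of the bundle must be in active mode; two passive ends never begin exchanging LACP packets.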
In Example 12-8, the topology includes two nodes with a GEC or FEC transport between them. This example shows one GEC interface on Node 1. (Up to four similar types of links per bundle are supported.)
Load Balancing on the ML-Series Cards
Load balancing of Ethernet traffic on a port channel is performed when a frame is sent through the port channel interface, based on the source and destination MAC addresses of the Ethernet frame.
On a two-port port channel interface, unicast Ethernet traffic (frames with learned unicast source and destination MAC addresses) is transmitted on either the first or the second member of the port channel based on the result of an exclusive OR (XOR) operation applied to the second least significant bit (bit 1) of the destination MAC address and the source MAC address. If the XOR of these two bits is 0, the frame is sent on the first member port; if the result is 1, the frame is transmitted on the second member port.
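As a worked illustration of the two-port unicast rule, using hypothetical MAC addresses (only the last octet is shown):

```
SA-MAC last octet = 0x02 = binary 0000 0010 -> bit 1 = 1
DA-MAC last octet = 0x04 = binary 0000 0100 -> bit 1 = 0
XOR of the two bits: 1 XOR 0 = 1            -> second member port
```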
Flooded Ethernet traffic (unknown-MAC unicast, multicast, and broadcast frames) is transmitted on the first active member of the port channel.
Routed IP unicast traffic from the ML-Series card towards the port channel ports is transmitted on either member interface based on the result of an exclusive OR (XOR) operation applied to the second least significant bit of the source and destination IP addresses of the IP packet. If the XOR of these two bits is 0, the packet is transmitted on the first member port; if the result is 1, the packet is transmitted on the second member port.
On a four-port EtherChannel, the second and third least significant bits are used for load balancing.
The routed IP Multicast traffic from the ML-Series towards the RPR ring is transmitted on the first active member of the port channel.
Load Balancing on the ML-MR-10 Card
The load balancing on the ML-MR-10 card can be configured through the following options:
Note The default load balancing mechanism on the ML-MR-10 card is based on the source and destination MAC addresses.
MAC Address Based Load Balancing
MAC address based load balancing is achieved by performing an exclusive OR (XOR) operation on the four least significant bits of the source MAC address and the destination MAC address.
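As a worked illustration with hypothetical MAC addresses, the XOR of the four least significant bits yields a 4-bit value; which member port that value selects is given by Table 12-5 or Table 12-6:

```
SA-MAC four LSBs = 0110 (0x6)
DA-MAC four LSBs = 0011 (0x3)
XOR result       = 0101 (decimal 5)
```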
Table 12-5 displays the Ethernet traffic distribution with four Gigabit Ethernet members on the port channel interface.
Table 12-6 displays the Ethernet traffic distribution with three Gigabit Ethernet members on the port channel interface.
Note Which member of the port channel interface carries the traffic depends on the order in which the Gigabit Ethernet ports become active members of the port channel interface. The order in which the members were added to the port channel can be found using the show interface port-channel <port-channel number> command in EXEC mode.
VLAN Based Load Balancing
VLAN based load balancing is achieved by using the four least significant bits of the incoming VLAN ID in the outer VLAN tag.
Table 12-7 displays the Ethernet traffic distribution with three Gigabit Ethernet members on the port channel interface.
Note Which member of the port channel interface carries the traffic depends on the order in which the Gigabit Ethernet ports become active members of the port channel interface. The order in which the members were added to the port channel can be found using the show interface port-channel <port-channel number> command in EXEC mode.
With four Gigabit Ethernet members, if the incoming VLAN ID is 20, the traffic is sent on member-0. If the incoming VLAN ID is 30, the traffic is sent on member-2.
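The two results above are consistent with mapping the four least significant bits of the VLAN ID onto the four members modulo the member count (an assumed interpretation, not stated explicitly in this guide):

```
VLAN 20 = binary 1 0100 -> four LSBs = 0100 (4);  4 mod 4 = 0 -> member-0
VLAN 30 = binary 1 1110 -> four LSBs = 1110 (14); 14 mod 4 = 2 -> member-2
```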
Load Balancing Configuration Commands
Table 12-8 details the commands used to configure load balancing on the ML-Series cards and the ML-MR-10 card.