Cisco Nexus 9000 ACI-Mode Switches Release Notes, Release 14.2(1)
The Cisco NX-OS software for the Cisco Nexus 9000 series switches is a data center, purpose-built operating system designed with performance, resiliency, scalability, manageability, and programmability at its foundation. It provides a robust and comprehensive feature set that meets the requirements of virtualization and automation in data centers.
This release works only on Cisco Nexus 9000 Series switches in ACI Mode.
This document describes the features, issues, and limitations for the Cisco NX-OS software. Use this document in combination with the Cisco Application Policy Infrastructure Controller Release Notes, Release 4.2(1).
Additional product documentation is listed in the "Related Documentation" section.
Release notes are sometimes updated with new information about restrictions and issues. See the following website for the most recent version of the Cisco Nexus 9000 ACI-Mode Switches Release Notes:
Table 1 shows the online change history for this document.
Table 1. Online Change History
Date | Description
July 19, 2022 | In the Open Issues section, added bugs CSCwb17229 and CSCwb39899.
May 16, 2022 | In the Open Issues section, added bug CSCwa47686.
August 10, 2021 | In the Open Issues section, added bug CSCvy30381.
July 29, 2021 | In the Modular Spine Switch Fabric Modules table, for N9K-C9504-FM, N9K-C9508-FM, and N9K-C9516-FM, changed the maximum to 6.
July 6, 2021 | In the Supported Hardware section, added the NXA-PAC-500W-PI and NXA-PAC-500W-PE PSUs.
June 24, 2021 | In the Open Issues section, added bug CSCvu07844.
June 15, 2021 | In the Open Issues section, added bug CSCvy43640.
May 17, 2021 | In the Open Issues section, added bug CSCvq57414.
January 22, 2021 | In the Open Issues section, added bug CSCvt73069.
January 19, 2021 | In the Known Behaviors section, changed "The Cisco Nexus 9508 ACI-mode switch supports warm (stateless) standby where the state is not synched between the active and the standby supervisor modules" to "The modular chassis Cisco ACI spine nodes, such as the Cisco Nexus 9508, support warm (stateless) standby where the state is not synched between the active and the standby supervisor modules."
March 13, 2020 | 14.2(1i): In the Resolved Issues section, added bug CSCvr98827. Added known behavior CSCvq56811.
December 5, 2019 | 14.2(1i): In the Open Issues section, added bug CSCvr76947.
October 29, 2019 | 14.2(1l): Release 14.2(1l) became available. Added the open and resolved issues for this release.
October 4, 2019 | 14.2(1i): In the Resolved Issues section, added bug CSCvn71475.
September 27, 2019 | In the Supported Hardware section, for the N9K-C9336C-FX2 switch, changed the port profile note to: "The port profile feature supports downlink conversion of ports 31 through 34. Ports 35 and 36 can only be used as uplinks."
September 20, 2019 | 14.2(1j): Release 14.2(1j) became available. Added the resolved issues for this release.
September 20, 2019 | In the Usage Guidelines section, added the following bullet: "A 25G link that is using the IEEE-RS-FEC mode can communicate with a link that is using the CL16-RS-FEC mode. There will not be a FEC mismatch and the link will not be impacted."
September 17, 2019 | 14.2(1i): In the Open Issues section, added bug CSCvr31410.
September 11, 2019 | In the Supported Hardware section, for the N9K-C9348GC-FXP, N9K-C93108TC-FX, N9K-C93108TC-FX-24, N9K-C93180YC-FX, and N9K-C93180YC-FX-24 switches, added the following note: "Incoming FCOE packets are redirected by the supervisor module. The data plane-forwarded packets are dropped and are counted as forward drops instead of as supervisor module drops."
September 8, 2019 | 14.2(1i): Release 14.2(1i) became available.
This document includes the following sections:
■ Contents
■ Bugs
The following sections list the supported hardware.
Table 2 Modular Spine Switches
Product ID | Description
N9K-C9504 | Cisco Nexus 9504 switch chassis
N9K-C9508 | Cisco Nexus 9508 switch chassis
N9K-C9508-B1 | Cisco Nexus 9508 chassis bundle with 1 supervisor module, 3 power supplies, 2 system controllers, 3 fan trays, and 3 fabric modules
N9K-C9508-B2 | Cisco Nexus 9508 chassis bundle with 1 supervisor module, 3 power supplies, 2 system controllers, 3 fan trays, and 6 fabric modules
N9K-C9516 | Cisco Nexus 9516 switch chassis
Table 3 Modular Spine Switch Line Cards
Product ID | Description | Maximum in Nexus 9504 | Maximum in Nexus 9508 | Maximum in Nexus 9516
N9K-X9736C-FX | Cisco Nexus 9500 36-port 40/100 Gigabit Ethernet Cloud Scale line card | 4 | 8 | 16
N9K-X9736Q-FX | Cisco Nexus 9500 36-port 40 Gigabit Ethernet Cloud Scale line card | 4 | 8 | 16
N9K-X9732C-EX | Cisco Nexus 9500 32-port 40/100 Gigabit Ethernet Cloud Scale line card. Note: The N9K-X9732C-EX line card cannot be used when a fabric module is installed in FM slot 25. | 4 | 8 | 16
N9K-X9736PQ | Cisco Nexus 9500 36-port 40 Gigabit Ethernet line card | 4 | 8 | 16
Table 4 Modular Spine Switch Fabric Modules
Product ID | Description | Minimum | Maximum
N9K-C9504-FM-E | Cisco Nexus 9504 cloud scale fabric module | 4 | 5
N9K-C9508-FM-E | Cisco Nexus 9508 cloud scale fabric module | 4 | 5
N9K-C9508-FM-E2 | Cisco Nexus 9508 cloud scale fabric module | 4 | 5
N9K-C9516-FM-E2 | Cisco Nexus 9516 cloud scale fabric module | 4 | 5
N9K-C9504-FM | Cisco Nexus 9504 classic fabric module | 3 | 6
N9K-C9508-FM | Cisco Nexus 9508 classic fabric module | 3 | 6
N9K-C9516-FM | Cisco Nexus 9516 classic fabric module | 3 | 6
Table 5 Modular Spine Switch Supervisor and System Controller Modules
Product ID | Description
N9K-SUP-A+ | Cisco Nexus 9500 Series supervisor module
N9K-SUP-B+ | Cisco Nexus 9500 Series supervisor module
N9K-SUP-A | Cisco Nexus 9500 Series supervisor module
N9K-SUP-B | Cisco Nexus 9500 Series supervisor module
N9K-SC-A | Cisco Nexus 9500 Series system controller
Table 6 Fixed Spine Switches
Product ID | Description
N9K-C9332C | Cisco Nexus 9300 platform switch with 32 40/100-Gigabit QSFP28 ports and 2 SFP ports. Ports 25-32 offer hardware support for MACsec encryption.
N9K-C9336PQ | Cisco Nexus 9336PQ switch, 36-port 40 Gigabit Ethernet QSFP
N9K-C9364C | Cisco Nexus 9364C switch, a 2-rack-unit (RU) fixed-port switch designed for spine-leaf-APIC deployment in data centers. This switch supports 64 40/100-Gigabit QSFP28 ports and two 1/10-Gigabit SFP+ ports. The last 16 of the QSFP28 ports are colored green to indicate that they support wire-rate MACsec encryption.
Table 7 Fixed Spine Switch Fans
Product ID | Description
N9K-C9300-FAN3 | Burgundy port side intake fan
N9K-C9300-FAN3-B | Blue port side exhaust fan
N9K-C9504-FAN | Fan tray for Cisco Nexus 9504 chassis
N9K-C9508-FAN | Fan tray for Cisco Nexus 9508 chassis
N9K-C9516-FAN | Fan tray for Cisco Nexus 9516 chassis
NXA-FAN-160CFM-PE | Blue port side exhaust fan
NXA-FAN-160CFM-PI | Burgundy port side intake fan
NXA-FAN-35CFM-PE | Blue port side exhaust fan
NXA-FAN-35CFM-PI | Burgundy port side intake fan
Table 8 Fixed Leaf Switches
Product ID | Description
N9K-C93240YC-FX2 | Cisco Nexus 9300 platform switch with 48 1/10/25-Gigabit Ethernet SFP28 ports and 12 40/100-Gigabit Ethernet QSFP28 ports. The N9K-C93240YC-FX2 is a 1.2-RU switch. Note: 10/25G-LR-S with QSA is not supported.
N9K-C93216TC-FX2 | Cisco Nexus 9300 platform switch with 96 1/10GBASE-T (copper) front panel ports and 12 40/100-Gigabit Ethernet QSFP28 spine-facing ports
N9K-C93360YC-FX2 | Cisco Nexus 9300 platform switch with 96 1/10/25-Gigabit front panel ports and 12 40/100-Gigabit Ethernet QSFP spine-facing ports. Note: The supported total number of fabric ports and port profile converted fabric links is 64.
N9K-C9336C-FX2 | Cisco Nexus 9336C-FX2 top-of-rack (ToR) switch with 36 fixed 40/100-Gigabit Ethernet QSFP28 spine-facing ports. Note: 1-Gigabit QSA is not supported on ports 1/1-6 and 1/33-36. The port profile feature supports downlink conversion of ports 31 through 34. Ports 35 and 36 can only be used as uplinks.
N9K-C93108TC-FX | Cisco Nexus 9300 platform switch with 48 1/10GBASE-T (copper) front panel ports and 6 fixed 40/100-Gigabit Ethernet QSFP28 spine-facing ports. Note: Incoming FCOE packets are redirected by the supervisor module. The data plane-forwarded packets are dropped and are counted as forward drops instead of as supervisor module drops.
N9K-C93108TC-FX-24 | Cisco Nexus 9300 platform switch with 24 1/10GBASE-T (copper) front panel ports and 6 fixed 40/100-Gigabit Ethernet QSFP28 spine-facing ports. Note: Incoming FCOE packets are redirected by the supervisor module. The data plane-forwarded packets are dropped and are counted as forward drops instead of as supervisor module drops.
N9K-C93180YC-FX | Cisco Nexus 9300 platform switch with 48 1/10/25-Gigabit Ethernet SFP28 front panel ports and 6 fixed 40/100-Gigabit Ethernet QSFP28 spine-facing ports. The SFP28 ports support 1-, 10-, and 25-Gigabit Ethernet connections and 8-, 16-, and 32-Gigabit Fibre Channel connections. Note: Incoming FCOE packets are redirected by the supervisor module. The data plane-forwarded packets are dropped and are counted as forward drops instead of as supervisor module drops.
N9K-C93180YC-FX-24 | Cisco Nexus 9300 platform switch with 24 1/10/25-Gigabit Ethernet SFP28 front panel ports and 6 fixed 40/100-Gigabit Ethernet QSFP28 spine-facing ports. The SFP28 ports support 1-, 10-, and 25-Gigabit Ethernet connections and 8-, 16-, and 32-Gigabit Fibre Channel connections. Note: Incoming FCOE packets are redirected by the supervisor module. The data plane-forwarded packets are dropped and are counted as forward drops instead of as supervisor module drops.
N9K-C9348GC-FXP | Cisco Nexus 9348GC-FXP switch with 48 100/1000-Megabit 1GBASE-T downlink ports, 4 10/25-Gigabit SFP28 downlink ports, and 2 40/100-Gigabit QSFP28 uplink ports.
N9K-C93108TC-EX | Cisco Nexus 9300 platform switch with 48 1/10GBASE-T (copper) front panel ports and 6 40/100-Gigabit QSFP28 spine-facing ports.
N9K-C93108TC-EX-24 | Cisco Nexus 9300 platform switch with 24 1/10GBASE-T (copper) front panel ports and 6 40/100-Gigabit QSFP28 spine-facing ports.
N9K-C93180LC-EX | Cisco Nexus 9300 platform switch with 24 40-Gigabit front panel ports and 6 40/100-Gigabit QSFP28 spine-facing ports. The switch can be used as either a 24-port 40G switch or a 12-port 100G switch. If port 1 is connected at 100G, port 2 is hardware disabled.
N9K-C93180YC-EX | Cisco Nexus 9300 platform switch with 48 1/10/25-Gigabit front panel ports and 6 40/100-Gigabit QSFP28 spine-facing ports.
N9K-C93180YC-EX-24 | Cisco Nexus 9300 platform switch with 24 1/10/25-Gigabit front panel ports and 6 40/100-Gigabit QSFP28 spine-facing ports.
N9K-C9372PX-E | Cisco Nexus 9372PX-E top-of-rack (ToR) Layer 3 switch with 48 1/10-Gigabit Ethernet SFP+ APIC-facing front panel ports and 6 40-Gbps Ethernet QSFP+ spine-facing ports. Note: Only the downlink ports 1-16 and 33-48 are capable of supporting SFP1-10G-ZR SFP+.
N9K-C9372TX-E | Cisco Nexus 9372TX-E top-of-rack (ToR) Layer 3 switch with 48 10GBASE-T (copper) front panel ports and 6 40-Gbps Ethernet QSFP+ spine-facing ports
N9K-C93120TX | Cisco Nexus 9300 platform switch with 96 1/10GBASE-T (copper) front panel ports and 6 40-Gigabit Ethernet QSFP spine-facing ports.
N9K-C93128TX | Cisco Nexus 9300 platform switch with 96 1/10GBASE-T (copper) front panel ports and 6 or 8 40-Gigabit Ethernet QSFP spine-facing ports.
N9K-C9332PQ | Cisco Nexus 9332PQ top-of-rack (ToR) Layer 3 switch with 26 40-Gigabit APIC-facing ports and 6 fixed 40-Gigabit spine-facing ports.
N9K-C9372PX | Cisco Nexus 9372PX top-of-rack (ToR) Layer 3 switch with 48 1/10-Gigabit Ethernet SFP+ APIC-facing front panel ports and 6 40-Gbps Ethernet QSFP+ spine-facing ports. Note: Only the downlink ports 1-16 and 33-48 are capable of supporting SFP1-10G-ZR SFP+.
N9K-C9372TX | Cisco Nexus 9372TX top-of-rack (ToR) Layer 3 switch with 48 1/10GBASE-T (copper) front panel ports and 6 40-Gbps Ethernet QSFP spine-facing ports
N9K-C9396PX | Cisco Nexus 9300 platform switch with 48 1/10-Gigabit SFP+ front panel ports and 6 or 12 40-Gigabit Ethernet QSFP spine-facing ports
N9K-C9396TX | Cisco Nexus 9300 platform switch with 48 1/10GBASE-T (copper) front panel ports and 6 or 12 40-Gigabit Ethernet QSFP spine-facing ports
Table 9 Expansion Modules
Product ID | Description
N9K-M12PQ | 12-port or 8-port 40 Gigabit Ethernet expansion module
N9K-M6PQ | 6-port 40 Gigabit Ethernet expansion module
N9K-M6PQ-E | 6-port 40 Gigabit Ethernet expansion module
Table 10 Fixed Leaf Switch Power Supply Units
Product ID | Description
N9K-PAC-1200W | 1200W AC power supply, port side intake, pluggable. Note: This power supply is supported only by the Cisco Nexus 93120TX, 93128TX, and 9336PQ ACI-mode switches.
N9K-PAC-1200W-B | 1200W AC power supply, port side exhaust, pluggable. Note: This power supply is supported only by the Cisco Nexus 93120TX, 93128TX, and 9336PQ ACI-mode switches.
N9K-PAC-3000W-B | 3000W AC power supply, port side intake
NXA-PAC-1100W-PE2 | 1100W AC power supply, port side exhaust, pluggable
NXA-PAC-1100W-PI2 | 1100W AC power supply, port side intake, pluggable
N9K-PAC-650W | 650W AC power supply, port side intake, pluggable
N9K-PAC-650W-B | 650W AC power supply, port side exhaust, pluggable
NXA-PDC-1100W-PE | 1100W DC power supply, port side exhaust, pluggable
NXA-PDC-1100W-PI | 1100W DC power supply, port side intake, pluggable
NXA-PHV-1100W-PE | 1100W HVAC/HVDC power supply, port side exhaust
NXA-PHV-1100W-PI | 1100W HVAC/HVDC power supply, port side intake
NXA-PAC-1200W-PE | 1200W AC power supply, port side exhaust, pluggable, with higher fan speeds for NEBS compliance. Note: This power supply is supported only by the Cisco Nexus 93120TX, 93128TX, and 9336PQ ACI-mode switches.
NXA-PAC-1200W-PI | 1200W AC power supply, port side intake, pluggable, with higher fan speeds for NEBS compliance. Note: This power supply is supported only by the Cisco Nexus 93120TX, 93128TX, and 9336PQ ACI-mode switches.
NXA-PAC-750W-PE | 750W AC power supply, port side exhaust, pluggable, with higher fan speeds for NEBS compliance. Note: This power supply is supported only on release 14.2(1) and later.
NXA-PAC-750W-PI | 750W AC power supply, port side intake, pluggable, with higher fan speeds for NEBS compliance. Note: This power supply is supported only on release 14.2(1) and later.
NXA-PAC-500W-PE | 500W AC power supply, port side exhaust, pluggable
NXA-PAC-500W-PI | 500W AC power supply, port side intake, pluggable
NXA-PDC-440W-PI | 440W DC power supply, port side intake, pluggable, with higher fan speeds for NEBS compliance. Note: This power supply is supported only by the Cisco Nexus 9348GC-FXP ACI-mode switch.
N9K-PUV-1200W | 1200W HVAC/HVDC dual-direction airflow power supply. Note: This power supply is supported only by the Cisco Nexus 93120TX, 93128TX, and 9336PQ ACI-mode switches.
N9K-PUV-3000W-B | 3000W AC power supply, port side exhaust, pluggable
UCSC-PSU-930WDC V01 | Port side exhaust DC power supply compatible with all ToR leaf switches
UCS-PSU-6332-DC | 930W DC power supply, reversed airflow (port side exhaust)
Table 11 Fixed Leaf Switch Fans
Product ID | Description
N9K-C9300-FAN2 | Burgundy port side intake fan
N9K-C9300-FAN2-B | Blue port side exhaust fan
N9K-C9300-FAN3 | Burgundy port side intake fan
N9K-C9300-FAN3-B | Blue port side exhaust fan
NXA-FAN-160CFM2-PE | Blue port side exhaust fan
NXA-FAN-160CFM2-PI | Burgundy port side intake fan
NXA-FAN-160CFM-PE | Blue port side exhaust fan
NXA-FAN-160CFM-PI | Burgundy port side intake fan
NXA-FAN-30CFM-B | Burgundy port side intake fan
NXA-FAN-30CFM-F | Blue port side exhaust fan
NXA-FAN-35CFM-PE | Blue port side exhaust fan
NXA-FAN-35CFM-PI | Burgundy port side intake fan
NXA-FAN-65CFM-PE | Blue port side exhaust fan
NXA-SFAN-65CFM-PE | Blue port side exhaust fan
NXA-FAN-65CFM-PI | Burgundy port side intake fan
NXA-SFAN-65CFM-PI | Burgundy port side intake fan
For tables of the FEX models that the Cisco Nexus 9000 Series ACI Mode switches support, see the following webpage:
For more information on the FEX models, see the Cisco Nexus 2000 Series Fabric Extenders Data Sheet at the following location:
This section lists the new and changed features in this release.
The following hardware features are now available:
■ The baremetal agent is now supported for the Cisco N9K-C93180YC-FX and N9K-C9348GC-FXP switches.
■ A limited license is now available that allows 48-port switches to be purchased with 24-port configurations. The limited license applies to the following switches:
— N9K-C93180YC-EX-24
— N9K-C93108TC-FX-24
— N9K-C93180YC-FX-24
— N9K-C93108TC-EX-24
■ The NXA-PAC-750W-PE and NXA-PAC-750W-PI power supplies are now available for the following switches:
— N9K-C93240YC-FX2
— N9K-C9332C
— N9K-C9336C-FX2
For new software features, see the Cisco Application Policy Infrastructure Controller Release Notes, Release 4.2(1).
For the changes in behavior, see the Cisco ACI Releases Changes in Behavior document.
The following procedure installs a Gigabit Ethernet module (GEM) in a top-of-rack switch:
1. Clear the switch’s current configuration by using the setup-clean-config command.
2. Power off the switch by disconnecting the power.
3. Replace the current GEM card with the new GEM card.
4. Power on the switch.
For other installation instructions, see the Cisco Application Centric Infrastructure Fabric Hardware Installation Guide.
■ For the supported optics per device, see the Cisco Optics-to-Device Compatibility Matrix.
■ Link level flow control is not supported on ACI-mode switches.
■ 100Mb optics, such as the GLC-TE, are supported at 100Mb speed only on -EX, -FX, -FX2, and -FX3 switches, such as the N9K-C93180YC-EX, N9K-C93180YC-FX, and N9K-93180C-FX switches, and only on front panel ports 1/1-48. 100Mb optics are not supported on any other switches. 100Mb optics cannot be used on EX or FX leaf switches on port profile converted downlink ports (1/49-52) using QSA.
■ This release supports the hardware and software listed on the ACI Ecosystem Compatibility List, and supports the Cisco AVS, Release 5.2(1)SV3(3.10).
■ To connect the N2348UPQ to ACI leaf switches, the following options are available:
— Directly connect the 40G FEX ports on the N2348UPQ to the 40G switch ports on the ACI leaf switches
— Break out the 40G FEX ports on the N2348UPQ to 4x10G ports and connect to the 10G ports on all other ACI leaf switches
Note: A fabric uplink port cannot be used as a FEX fabric port.
■ To connect the APIC (the controller cluster) to the ACI fabric, you must have a 10G interface on the ACI leaf switch. You cannot connect the APIC directly to the C9332PQ ACI leaf switch.
■ We do not qualify third party optics in Cisco ACI. When using third party optics, the behavior across releases is not guaranteed, meaning that the optics might not work in some NX-OS releases. Use third party optics at your own risk. We recommend that you use Cisco SFPs, which have been fully tested in each release to ensure consistent behavior.
■ On Cisco ACI platforms, 25G copper optics do not honor auto-negotiation, and therefore auto-negotiation on the peer device (ESX or standalone) must be disabled to bring up the links.
■ The following tables provide compatibility information for specific hardware:
Table 13 Fixed Spine Switch Compatibility Information
Product ID | Compatibility Information
N9K-C9336PQ | The Cisco N9K-C9336PQ switch is supported for multipod. The N9K-C9336PQ switch is not supported for inter-site connectivity with Cisco ACI Multi-Site, but is supported for leaf switch-to-spine switch connectivity within a site. The N9K-C9336PQ switch is not supported when multipod and Cisco ACI Multi-Site are deployed together.
Table 14 Modular Spine Switch Line Card Compatibility Information
Product ID | Compatibility Information
N9K-X9736C-FX | 1-Gigabit QSA is not supported on ports 1/29-36. This line card supports the ability to add a fifth fabric module to the Cisco N9K-C9504 and N9K-C9508 switches. The fifth fabric module can only be inserted into slot 25.
Table 15 Fixed Leaf Switch Compatibility Information
Product ID | Compatibility Information
N9K-C9348GC-FXP | This switch supports the following PSUs: ■ NXA-PAC-350W-PI ■ NXA-PAC-350W-PE ■ NXA-PAC-1100W-PI ■ NXA-PAC-1100W-PE The following information applies to this switch: ■ Incoming FCOE packets are redirected by the supervisor module. The data plane-forwarded packets are dropped and are counted as forward drops instead of as supervisor module drops. ■ When a Cisco N9K-C9348GC-FXP switch has only one PSU inserted and connected, the PSU status for the empty PSU slot will be displayed as "shut" instead of "absent" due to a hardware limitation. ■ The PSU SPROM is not readable when the PSU is not connected. The model displays as "UNKNOWN" and the status of the module displays as "shutdown."
N9K-C93180LC-EX | This switch has the following limitations: ■ The top and bottom ports must use the same speed. If there is a speed mismatch, the top port takes precedence and the bottom port is error-disabled. Both ports must be used in either the 40 Gbps or 10 Gbps mode. ■ Ports 26 and 28 are hardware disabled. ■ This release supports 40 and 100 Gbps for the front panel ports. The uplink ports can be used at the 100 Gbps speed. ■ Port profiles and breakout ports are not supported on the same port.
Table 16 Fixed Spine Switch Compatibility Information
Product ID | Compatibility Information
N9K-C9364C | You can deploy multipod or Cisco ACI Multi-Site separately (but not together) on the Cisco N9K-C9364C switch starting in the 3.1 release. You can deploy multipod and Cisco ACI Multi-Site together on the Cisco N9K-C9364C switch starting in the 3.2 release. A 930W DC PSU (NXA-PDC-930W-PE or NXA-PDC-930W-PI) is supported in redundancy mode if 3.5W QSFP+ modules or passive QSFP cables are used and the system is operated at an ambient temperature of 40°C or less; for other optics or a higher ambient temperature, a 930W DC PSU is supported only with 2 PSUs in non-redundancy mode. 1-Gigabit QSA is not supported on ports 1/49-64. This switch supports the following PSUs: ■ NXA-PAC-1200W-PE ■ NXA-PAC-1200W-PI ■ N9K-PUV-1200W ■ NXA-PDC-930W-PE ■ NXA-PDC-930W-PI
Table 17 Fixed Leaf Switches Compatibility Information
Product ID | Compatibility Information
N9K-C9332PQ | To connect the Cisco APIC to the Cisco ACI fabric, you must have a 10G interface on the ACI leaf switch. You cannot connect the APIC directly to the N9K-C9332PQ ACI leaf switch.
■ The following table provides MACsec and CloudSec compatibility information for specific hardware:
Table 20 MACsec and CloudSec Support
Product ID | Hardware Type | MACsec Support | CloudSec Support
N9K-C93108TC-FX | Switch | Yes | No
N9K-C93180YC-FX | Switch | Yes | No
N9K-C93216TC-FX2 | Switch | Yes | No
N9K-C93240YC-FX2 | Switch | Yes | No
N9K-C9332C | Switch | Yes | Yes, only on the last 8 ports
N9K-C93360YC-FX2 | Switch | Yes | No
N9K-C9336C-FX2 | Switch | Yes | No
N9K-C9348GC-FXP | Switch | Yes, only with 10G+ | No
N9K-C9364C | Switch | Yes | Yes, only on the last 16 ports
N9K-X9736C-FX | Line Card | Yes | Yes, only on the last 8 ports
The following additional MACsec and CloudSec compatibility restrictions apply:
■ MACsec is not supported at 1G speed on Cisco ACI leaf switches.
■ MACsec is supported only on the leaf switch ports where an L3Out is enabled. For example, MACsec between a Cisco ACI leaf switch and any computer host is not supported. Only switch-to-switch mode is supported.
■ When using copper ports, the copper cables must be connected directly to the peer device (a standalone N9K) in 10G mode.
■ A 10G copper SFP module on the peer is not supported.
■ CloudSec only works with spine switches in Cisco ACI and only works between sites managed by Cisco ACI Multi-Site.
■ For CloudSec to work properly, all of the spine switch links that participate in Cisco ACI Multi-Site must have MACsec/CloudSec support.
■ The current list of protocols that are always allowed (and cannot be blocked through contracts) includes the following. Some of the protocols distinguish between source and destination ports.
Note: See the Cisco Application Policy Infrastructure Controller Release Notes, Release 4.2(1) for policy information.
— UDP DstPort 161: SNMP. Creating an SNMP ClientGroup with a list of client IP addresses restricts SNMP access to only those configured client IP addresses. If no client IP address is configured, SNMP packets are allowed from anywhere.
— TCP SrcPort 179: BGP
— TCP DstPort 179: BGP
— OSPF
— UDP DstPort 67: BOOTP/DHCP
— UDP DstPort 68: BOOTP/DHCP
— IGMP
— PIM
— UDP SrcPort 53: DNS replies
— TCP SrcPort 25: SMTP replies
— TCP DstPort 443: HTTPS
— UDP SrcPort 123: NTP
— UDP DstPort 123: NTP
■ The Cisco APIC GUI incorrectly reports more memory used than is actually used. To calculate the actual memory usage, run the show system internal kernel meminfo | egrep "MemT|MemA" command on the desired switch. Divide MemAvailable by MemTotal, multiply the result by 100 to get the percentage free, then subtract that percentage from 100 to get the percentage used.
— Example: 10680000 / 24499856 = 0.436. 0.436 x 100 = 43.6% free. 100 - 43.6 = 56.4% used.
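The calculation above can be scripted against the command output. This is a minimal sketch assuming meminfo-style MemTotal/MemAvailable lines in kB; the function name is illustrative:

```python
import re

def used_memory_percent(meminfo_text):
    """Compute used-memory percent from MemTotal/MemAvailable lines,
    following the formula above: 100 - (MemAvailable / MemTotal * 100)."""
    fields = dict(re.findall(r"^(Mem\w+):\s+(\d+)", meminfo_text, re.MULTILINE))
    total = int(fields["MemTotal"])
    avail = int(fields["MemAvailable"])
    return round(100 - (avail / total * 100), 1)

# Figures from the example above: 10680000 kB available of 24499856 kB total.
sample = "MemTotal:       24499856 kB\nMemAvailable:   10680000 kB\n"
print(used_memory_percent(sample))  # → 56.4
```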
■ Leaf and spine switches from two different fabrics cannot be connected regardless of whether the links are administratively kept down.
■ Only one instance of OSPF (or any multi-instance process using the managed object hierarchy for configurations) can have write access to the operational database. As a result, the operational database is limited to the default OSPF process alone, and the multipodInternal instance does not store any operational data. To debug the OSPF instance ospf-multipodInternal, use the VSH prompt. Do not use ibash, because some ibash commands depend on operational data stored in the database.
■ When you enable or disable Federal Information Processing Standards (FIPS) on a Cisco ACI fabric, you must reload each of the switches in the fabric for the change to take effect. The configured scale profile setting is lost when you issue the first reload after changing the FIPS configuration. The switch remains operational, but it uses the default port scale profile. This issue does not happen on subsequent reloads if the FIPS configuration has not changed.
FIPS is supported on Cisco NX-OS release 14.2(1) or later. If you must downgrade the firmware from a release that supports FIPS to a release that does not support FIPS, you must first disable FIPS on the Cisco ACI fabric and reload all of the switches in the fabric.
■ You cannot use the breakout feature on a port that has a port profile configured on a Cisco N9K-C93180LC-EX switch. With a port profile on an access port, the port is converted to an uplink, and breakout is not supported on an uplink. With a port profile on a fabric port, the port is converted to a downlink. Breakout is currently supported only on ports 1 through 24.
■ On Cisco 93180LC-EX switches, ports 25 and 27 are the native uplink ports. If you use a port profile to convert ports 25 and 27 to downlink ports, ports 29, 30, 31, and 32 are still available as four native uplink ports. Because of the threshold on the number of uplink ports (a maximum of 12), you can convert 8 more downlink ports to uplink ports. For example, ports 1, 3, 5, 7, 9, 13, 15, and 17 are converted to uplink ports, and ports 29, 30, 31, and 32 are the 4 native uplink ports, which reaches the maximum uplink port limit on Cisco 93180LC-EX switches.
When the switch is in this state and the port profile configuration is deleted on ports 25 and 27, ports 25 and 27 are converted back to uplink ports, but in this example there are already 12 uplink ports on the switch. To accommodate ports 25 and 27 as uplink ports, 2 random ports from the range 1, 3, 5, 7, 9, 13, 15, 17 are denied the uplink conversion; you cannot control which ports are chosen. Therefore, you must clear all the faults before reloading the leaf node to avoid any unexpected behavior regarding the port type. If a node is reloaded without clearing the port profile faults, especially when there is a fault related to limit-exceeded, the ports might be in an unexpected mode.
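The port-count arithmetic above can be sketched as a small check. This is a simplified model, not ACI behavior: the function name, the set-based bookkeeping, and the assumption that the uplinks of interest are ports 25, 27, and 29-32 are for illustration only:

```python
MAX_UPLINKS = 12                       # maximum uplink ports on the N9K-C93180LC-EX
NATIVE_UPLINKS = {25, 27, 29, 30, 31, 32}  # simplified set of native uplinks

def remaining_uplink_conversions(converted_to_downlink, converted_to_uplink):
    """Return how many more downlink ports can still be converted to
    uplinks, given the current port-profile conversions (simplified model)."""
    uplinks = (NATIVE_UPLINKS - set(converted_to_downlink)) | set(converted_to_uplink)
    return MAX_UPLINKS - len(uplinks)

# Example from the text: converting ports 25 and 27 to downlinks leaves
# four native uplinks (29-32), so eight more conversions are possible.
print(remaining_uplink_conversions({25, 27}, set()))  # → 8
```

Converting eight more ports (for example 1, 3, 5, 7, 9, 13, 15, 17) then exhausts the limit, which is the state in which deleting the port profile on ports 25 and 27 forces two of those conversions to be denied.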
■ When using a 25G Mellanox cable that is connected to a Mellanox NIC, you can set the ACI leaf switch port to run at a speed of 25G or 10G.
■ You cannot use auto-negotiation on the spine switch or leaf switch side with 40G or 100G CR4 optics. For 40G copper transceivers, you must disable auto-negotiation and set the speed to 40G. For 100G copper transceivers, you must disable auto-negotiation on the remote end and set the speed to 100G.
■ A 25G link that is using the IEEE-RS-FEC mode can communicate with a link that is using the CL16-RS-FEC mode. There will not be a FEC mismatch and the link will not be impacted.
This section contains lists of open and resolved issues and known behaviors.
Table 21 Open Issues in This Release
Description |
Exists In |
|
A spine switch fabric module or line card is reloaded unexpectedly due to a kernel panic. The stack trace includes the following statement: |
14.2(1l) and later |
|
Fault F3525 (high SSD usage) is observed. |
14.2(1j) and later |
|
A switch SSD fails in less than two years and needs replacement. The /mnt/pss/ssd_log_amp.log file shows daily P/E cycles increasing by 10 or more each day, and fault "F3525: High SSD usage" is observed. Check the switch activity and contact Cisco Technical Support if the "High SSD usage" fault is raised on the switch. |
14.2(1j) and later |
|
The COPP per interface policy feature has following limitation: The TCAM entry maximum for per interface per protocol is 256. After the threshold is exceeded, a fault will be raised. For more information, see: This enhancement request is to have a CLI command to display the number of required ACLs/TCAM entries for the COPP policy that is applied on a per-interface level. The command should possibly display whether the configuration will succeed or not based on the current overall TCAM usage. |
14.2(1j) and later |
|
Pinging the inband-mgmt of a switch that is running in the ACI mode sometimes fails. This happens between leaf switches and also between leaf switches and spine switches. |
14.2(1j) and later |
|
After disabling unicast routing from a bridge domain, the static pervasive route is still present on the leaf switch. |
14.2(1j) and later |
|
A Cisco ACI node reloads due to a Machine Check Exception similar to the following output: [603029.390562] sbridge: HANDLING MCE MEMORY ERROR [603029.390563] CPU 0: Machine Check Exception: 0 Bank 7: 8c00004000010091 [603029.390564] TSC 0 ADDR 2e3d3f40 MISC 140545486 PROCESSOR 0:50663 TIME 1569464793 SOCKET 0 APIC 0 [603029.390710] sbridge: HANDLING MCE MEMORY ERROR |
14.2(1i) through 14.2(1j) |
|
With N9K-C9348GC-FXP top-of-rack switches, on ports where PoE is enabled in auto mode, there is a potential memory leak that can be seen in fault scenarios, such as the short-ckt fault, overcurrent, or max current. Leaks can also be observed when multiple negotiations occur with a powered device (PD) or when EPG or VLAN information on a PoE interface policy is changed multiple times. |
14.2(1i) through 14.2(1j) |
|
After upgrading a leaf switch, the switch brings up the front panel ports before the policies are programmed. This may cause a connectivity issue if a connected host relies on the link level state to decide whether or not it can forward traffic on a particular NIC or port. The loss duration would be proportional to the scale of configuration policies that must be programmed. |
14.2(1i) through 14.2(1j) |
|
When an ARP request is generated from one endpoint to another endpoint in an isolated EPG, an ARP glean request is generated for the first endpoint. |
14.2(1i) and later |
|
In COOP, the MAC IP address route has the wrong VNID, and endpoints are missing from the IP address DB of COOP. |
14.2(1i) and later |
|
If Cisco ACI Virtual Edge or AVS is operating in VxLAN non-switching mode behind a FEX, the traffic between endpoints in the same EPG will fail when the bridge domain has ARP flooding enabled. |
14.2(1i) and later |
|
CRC errors increment on a leaf switch front panel port, fabric ports, and spine switch ports in a fabric with switches whose model names end with -EX, -FX, or later. |
14.2(1i) and later |
|
When downgrading a Cisco ACI fabric, the OSPF neighbors go down after downgrading the Cisco APICs from a 3.2 or later release to a pre-3.2 release. After the downgrade, the switches are still running a 13.2 or later release. |
14.2(1i) and later |
|
SAN port channel bringup will be unsuccessful when a new vendor switch is connected and the Organizationally Unique Identifier (OUI) of the switch is not present in the OUI list. |
14.2(1i) and later |
|
If a tenant is undeployed from the Multi-Site Orchestrator, the limited VRF instances that were created on the spine switch for GOLF can be stuck in deletion, depending on timing. This issue occurs in a single pod topology with a VRF instance and bridge domain that are stretched across sites and with GOLF host routes enabled. |
14.2(1i) and later |
|
Copy service traffic will fail to reach the TEP where the copy devices are connected. Traffic will not be seen on the spine switches. |
14.2(1i) and later |
|
A leaf switch experiences an unexpected reload due to a HAP reset. |
14.2(1i) and later |
|
A contract that is provided by an EPG using a bridge domain with subnet X and that is consumed by an L3Out EPG causes a leak of subnet X from VRF B to VRF A. The existing non-pervasive static route in VRF A is replaced by a pervasive route pointing to the spine switch V4 proxy. After the contract that leaks subnet X is removed, the pervasive static route persists. |
14.2(1i) and later |
|
When a vPC leg goes down and comes back up, a long traffic drop (1-2 minutes) may occur. |
14.2(1i) and later |
|
BFD sessions keep flapping between a -GX leaf switch and spine switches. The "show system internal bfd event-history session" command shows multiple instances of the Echo function failing. |
14.2(1i) and later |
|
Some ECMP paths may flap between "multipath" and "non-multipath." For example, if the configured eBGP maximum ECMP number is 10 and there are 16 BGP ECMP paths for a prefix in the BGP routing table, then 5 paths change between multipath and non-multipath whenever the BGP bestpath calculation is run. |
14.2(1i) and later |
|
There is a system reset due to the sysmgr process failing to re-register with the heartbeat KLM. |
14.2(1i) and later |
|
The VMAC of an ACI SVI is used for an endpoint refresh instead of the PMAC. This can cause an endpoint refresh issue if, for example, the endpoint is reachable through OTV (when HSRP filtering is used and the HSRP MAC address is set as a virtual MAC address). |
14.2(1i) and later |
|
The EPMC process crashes continuously. |
14.2(1i) and later |
|
HSRP/VRRP packets fail to flood locally on a service leaf switch, which causes a dual-active state. |
14.2(1i) and later |
|
A last hop router does not generate the (S,G) tree. |
14.2(1i) and later |
|
There is an RPF lookup failure for MRIB. |
14.2(1i) and later |
|
The N2348TQ FEX randomly reboots. A crash in the 'tiburon' and/or 'ethpc' service may be observed in the syslogs immediately prior to the reload event. |
14.2(1i) and later |
|
In Cisco ACI, when using MAC pinning with a vPC, running the "show vpc brief" command in the CLI before reloading shows that the vPC is passing consistency checks. However, after reloading the leaf switch, the vPC properly displays the consistency check as "Not Applicable". |
14.2(1i) and later |
|
On leaf or spine switches, LDAP authentication requests might get sent out of the out-of-band interface (eth0 or eth6) even though the LDAP provider is configured to use an in-band EPG. |
14.2(1i) and later |
|
The N2348TQ FEX randomly reboots. A crash in the 'tiburon' and/or 'ethpc' service may be observed in the syslogs immediately prior to the reload event. |
14.2(1i) and later |
|
Traffic drops occur if there is redirect traffic towards a remote leaf switch and a Layer 2 service device is deployed behind a remote leaf switch in RL Direct mode. |
14.2(1i) and later |
|
The N9K-C93180YC-EX leaf switch reboots for an unknown reason without any affected services:
Last reset
Reason: Unknown
System version: <VERSION>
Service: |
14.2(1i) and later |
|
An interface does not come up when a new link is connected. However, from the DOM data, the signals are present. |
14.2(1i) and later |
|
In a source outside and receiver inside scenario, PIM on the NBL computes the stripe winner when there is a local receiver. As a result, PIM sends out joins over the fabric. However, the route is currently not propagated to the other border leaf switches, which is the expected behavior. |
14.2(1i) and later |
|
In a GOLF setup on a spine switch, when the bridge domain subnets and endpoints are associated to a newer VRF table and the older VRF table is deleted, after changing the VRF table (detaching the old VRF table and attaching a new one), it takes a long time (approximately 30 minutes) for the host routes of endpoints to be advertised to the CSR GOLF router. |
14.2(1i) and later |
|
There is a kernel-panic out of memory crash. The following logs appear in the kernel traces: Kernel panic - not syncing: Out of memory: system-wide panic_on_oom is enabled |
14.2(1i) and later |
|
When the virtual MAC address is deleted on the APIC, the switch processes do not delete the virtual MAC address information from the database. As a result, some of the switch modules continue to use the deleted virtual MAC address for host-tracking endpoints and for routing traffic whose destination MAC address is the virtual MAC address. |
14.2(1i) and later |
|
The SPAN manager process crashes when a SPAN session is deleted. |
14.2(1i) and later |
|
When a physical interface or transceiver is defective or has a misconfigured speed, the fault displayed in the GUI has additional whitespace. |
14.2(1i) and later |
|
Leaf switches crash and generate a core file after invoking the ACI snapshot rollback. |
14.2(1i) and later |
|
A Cisco ACI modular spine switch (Cisco Nexus 9504 chassis) with redundant supervisor modules (N9K-SUP-A) experiences an unexpected series of switchovers during a 6-minute period. |
14.2(1i) and later |
|
After removing a transceiver or cable from the interface, the port LED remains green. A port is physically down, but the "show interface" command says that the port is still up. |
14.2(1i) and later |
|
Traffic with a UDP destination port of 8472 is dropped on ingress by the ACI fabric. |
14.2(1i) and later |
|
The iBash "show interface ethernet <portnum>" command does not show CRC and stomped CRC errors. |
14.2(1i) and later |
|
A false positive lifetime endurance fault is generated for a new SSD (model SHMST064G3FECTLP51) on the Cisco APIC. |
14.2(1i) and later |
|
There are no aggregated stats from L3If to L3ExtOut. |
14.2(1i) and later |
|
After upgrading leaf switches and after the switches come online on the target firmware version, reloading the chassis causes a failure to boot and a crash to the Loader> prompt with nothing left in the bootflash from which to boot. |
14.2(1i) and later |
|
An LLDP/CDP MAC address entry gets stuck in the blade switch table on a leaf switch in a vPC. The entry can get stuck if the MAC address flaps and hits the move detection interval, which stops all learning for the address. Use the following command to verify if a switch has a stale MAC address entry: module-1# show system internal epmc bladeswitch_mac all |
14.2(1i) and later |
|
A remote leaf switch is stuck in the "inactive" state after being registered into the fabric. |
14.2(1i) and later |
|
A Cisco ACI leaf switch unexpectedly reloads and generates a core file. |
14.2(1i) and later |
|
10Gbase-ER optics will not link up. |
14.2(1i) and later |
|
When PoE is configured on ports and a stateful reload is performed--that is, the switch is manually reloaded with the config restore option--a PoE power adjustment can happen during the same interval in which APIC discovery occurs. This results in a PoE core, because the process expects the objstore to always return a desired interface object that may have been updated. |
14.2(1i) and later |
|
PIM border leaf switches do not see each other as PIM neighbors, resulting in potentially both being the stripe-winner and sending upstream joins. This can result in duplicate packets. |
14.2(1i) and later |
|
The Netflow (nfm) process crashes during configuration changes. |
14.2(1i) and later |
|
Whenever a switch hits a burst of PCIe, DRAM, or MCE errors, sometimes the device_test process crashes, which can cause the switch to reload. |
14.2(1i) and later |
|
On a border leaf switch, some of the routes that are removed from the routing table are not removed from the BGP VPNv4 prefixes. |
14.2(1i) and later |
|
Some of the control plane packets are incorrectly classified as the user class and are reported as dropped on single-chip spine switches. The statistics are incorrect because the packets are not actually dropped. |
14.2(1i) and later |
|
When running "show system internal epm endpoint all summary" on an FX leaf, the command output is cut short. |
14.2(1i) and later |
|
There is a memory leak with svc_ifc_streame. |
14.2(1i) and later |
|
Multiple switches crash and generate a core file at the same time when an NTP policy is applied. |
14.2(1i) and later |
|
The spine outerdstip, which indicates the egress TEP connecting to the Tetration network, is not updated when an egress L3Out in the mgmt:inb VRF fails over to a redundant L3Out on another leaf switch. |
14.2(1i) and later |
|
All leaf switch downlinks go down at the same time due to FabricTrack. |
14.2(1i) and later |
|
After a certain set of steps, the deny-external-tag route map that is used for transit routing loop prevention gets set back to the default tag 4294967295. Because routes arriving in Cisco ACI with this tag are denied from being installed in the routing table, if the VRF table that has the route-tag policy provides transit for another VRF table in Cisco ACI (for instance, an inside and an outside VRF with a firewall connecting them) and the non-transit VRF table has the default route-tag policy, routes from the non-transit VRF table are not installed in the transit VRF table. This bug is also particularly impactful in scenarios where transit routing is used and OSPF or EIGRP runs on a vPC border leaf switch pair. vPC border leaf switches peer with each other, so if member A gets a transit route from BGP, redistributes it into OSPF, and then advertises it to member B (because they are peers), then without a loop prevention mechanism, member B would install the route through OSPF, because OSPF has a better administrative distance, and would then advertise the route back into BGP. The VRF tag is set on redistribution from BGP into OSPF, and a table map in OSPF blocks routes with the tag from being installed in the routing table. When hitting this bug, the route map used for redistributing into OSPF still sets the tag to the correct value. However, the table map no longer matches the correct tag; rather, it matches the default tag. As a result, member A (or B) installs the route through OSPF pointing to B. It then redistributes the route back into BGP with the MED set to 1. The rest of the fabric (including member B) installs the BGP route pointing to member A because its MED is better than the original route's MED. |
14.2(1i) and later |
|
In Cisco ACI 4.1 releases, FEX port channel member interfaces (NIFs) can no longer be configured as SPAN source interfaces. The following fault is raised and the SPAN session remains operationally down: F1199: Span source interface sys/phys-[eth1/x] on node xxx in failed state reason Configuration not supported on this TOR. |
14.2(1i) and later |
|
The "get_bkout_cfg failed" error displays when the following vsh_lc cli command is executed: vsh_lc -c "show system internal port-client event-history all" |
14.2(1i) and later |
|
The policy_mgr process on an ACI leaf switch has a memory leak and results in an unexpected reload. The problem can happen over a long period of time, such as a year. Depending on when individual switches were last rebooted, multiple devices could experience the reload at around the same time. |
14.2(1i) and later |
|
Port 1/2 on the N9K-C9364C switch flaps continuously and does not come up. |
14.2(1i) and later |
|
An N9K-X9736PQ linecard in an ACI-mode Nexus 9500 spine switch unexpectedly reloads. The following output is seen in the "show system reset-reason module 1" command:
`show system reset-reason module 1`
*************** module reset reason (1) *************
0) At 2019-12-01T00:00:00.00
Reason: line-card-not-responding
Service: Line card not responding => [Failures < MAX] : powercycle
Version: |
14.2(1i) and later |
|
After a virtual machine is vMotioned, traffic sourced from that endpoint begins to drop. When running "show logging ip access-list internal packet-log deny" on the leaf switch, you can see policy drops for the endpoint. |
14.2(1i) and later |
|
A local AS configuration is not applied to the eBGP neighbor on a Cisco ACI border leaf switch, which results in the switch sending the fabric ASN (configured in the BGP Route Reflector policy) in the OPEN messages, which makes the neighbor reject the session because of the "bad remote-as" reason. |
14.2(1i) and later |
|
Connectivity between a server EPG and external L3Out EPG can be broken for some subnets that are configured with an external subnet for an external EPG. |
14.2(1i) and later |
|
After a link to a Cisco ACI leaf switch flaps, ARP continuously refreshes, and unicast traffic to a neighboring device is non-functional. In a packet capture, the leaf switch continuously sends ARP requests for the neighboring device, even though that device is sending ARP responses. When running "show ip arp vrf tenant:vrf", the age of the ARP entry is always 0 seconds. |
14.2(1i) and later |
|
A vPC pair of leaf switches go into the split brain mode, causing traffic duplication. |
14.2(1i) and later |
|
A modular spine switch gets stuck during upgrade. |
14.2(1i) and later |
|
The error message 'No handlers could be found for logger "root"' appears when performing a moquery for certain objects. |
14.2(1i) and later |
|
Some ARP packets get dropped across the Cisco ACI fabric. |
14.2(1i) and later |
|
A leaf switch will crash with a vntag_mgr HAP reset and generate a core file. |
14.2(1i) and later |
|
Traffic destined to a switch is dropped by policy. The contracts configured on the switch look correct, but the ELAM drop reason clearly shows SECURITY_GROUP_DENY. If you dump the FPC and FPB pt.index results of the ELAM, the values are different. Specifically, the FPC index is wrong when you check the Stats Idx under the specific ACLQOS rule. FPC should be the summary of the final result. In this case, there are two hits, but there is one stable entry in the TCAM and one that is not stable. |
14.2(1i) and later |
|
All routes to a particular spine switch are removed from uRIB on all leaf switches in the fabric. |
14.2(1i) and later |
|
The policy element crashes due to database space exhaustion with B22 FEX devices in the fabric. |
14.2(1i) and later |
|
The pervasive static route is missing on the spine node. |
14.2(1i) and later |
|
A link intermittently flaps on leaf switch fabric ports that are connected to a spine switch. |
14.2(1i) and later |
|
Glean ARP (0xfff2, 239.255.255.240) flood is stopped on the transit leaf switch and is not delivered toward all the leaf switches in the fabric. Thus, silent host discovery does not work. |
14.2(1i) and later |
|
A leaf switch reloaded with an NFM process core. |
14.2(1i) and later |
|
There is a stale pervasive route after a DHCP relay label is deleted. |
14.2(1i) and later |
|
A Cisco ACI leaf switch sends traffic that is untagged for a particular VLAN even though it is configured as trunk (tagged). |
14.2(1i) and later |
|
The policy element crashes once during a misconfiguration. |
14.2(1i) and later |
|
An ARP request from an endpoint behind the remote leaf switch is received on the ToR switch and is flooded to the spine switch as expected (ARP flooding is enabled on the bridge domain). This can be an issue in cases where the endpoint is behind a device such as a Fabric Interconnect, for which it may be expected behavior to delete the MAC address if the endpoint receives the same MAC address back from the upstream leaf switches. |
14.2(1i) and later |
|
A Cisco ACI fabric is not fully fit after a Cisco APIC firmware upgrade. |
14.2(1i) and later |
|
The Cisco N9K-C9316D-GX spine switches encounter an SDKHAL process crash if the route hardware scale limits are exceeded. |
14.2(1i) and later |
|
A leaf switch crashes and reloads due to "nfm hap reset". |
14.2(1i) and later |
|
There are faults for failed contract rules and prefixes on switches prior to the -EX switches. Furthermore, traffic that is destined to an L3Out gets dropped because the compute leaf switches do not have the external prefix programmed in ns shim GST-TCAM. You might also see that leaf switches prior to the -EX switches do not have all contracts programmed correctly in the hardware. |
14.2(1i) and later |
|
When a Cisco N9K-C93180LC-EX, N9K-C93180YC-EX, or N9K-C93108TC-EX leaf switch receives control, data, or BUM traffic from the front panel ports with the storm policer configured for BUM traffic, the storm policer will not get enforced. As such, the switch will let all such traffic through the system. |
14.2(1i) and later |
|
If inter-VRF DHCP relay is used, it may be observed that DHCP breaks after performing any activity that causes the client VRF to get removed and re-deployed on the client leaf nodes. |
14.2(1i) and later |
|
If a spine switch's PTEP is configured as the multipod L3Out router ID and the router ID is later changed, the spine switch's PTEP loopback gets deleted and the MP BGP session goes down. |
14.2(1i) and later |
|
The following event can be seen on the spine node: [E4204936][transition][warning][sys] %URIB-4-SYSLOG_SL_MSG_WARNING: URIB-5-RPATH_DELETE: message repeated 1 times in last 220162 sec |
14.2(1i) and later |
|
A Cisco ACI leaf switch reboots due to an ICMPv6 HAP reset. |
14.2(1i) and later |
|
There is an event in which the syslog message is masked and does not provide details about the issue. The main syslog message is not seen, but rate-throttled syslog messages are seen. |
14.2(1i) and later |
|
If a rogue file grows too large, it can cause an out-of-memory condition on a spine switch or a leaf switch line card or fabric module without proactively alerting the user to the memory leak, and the line card or fabric module will reload. |
14.2(1i) and later |
|
Paths to L1/L2 devices do not get programmed although they are tracked as up. This happens in an active-standby deployment. |
14.2(1i) and later |
|
The spine node KIC database is missing the v4 default route from RIB. This causes in-band return traffic to drop on the way back to the border leaf nodes. |
14.2(1i) and later |
|
When a Cisco ACI switch is configured in a "maintenance mode" (mmode), a banner is displayed to the user indicating the operating mode of the switch. |
14.2(1i) and later |
|
When walking through SNMP targets, SNMP generates a core file on a spine switch. |
14.2(1i) and later |
|
Zoning-rules are not programmed in the hardware after reloading a switch. |
14.2(1i) and later |
|
Triggered by a physical layer issue, such as a bad fiber or transceiver, a link flap may happen occasionally. However, it is uncommon to have continuous flaps when the node is left unattended over an extended period, such as 688,000 flaps over a year. Each time the fabric link flaps, one dbgRemotePort managed object is added to the policy element database. After a long period of flapping like this, unexpected memory allocation and access can be triggered for a Nexus OS process, such as policy_mgr or ethpm. This defect is an enhancement to the object store to reduce the impact of such scenarios. |
14.2(1i) and later |
|
VTEP endpoints are learned and set to bounce on some leaf switches. A single VTEP IP address could be seen as local on one vPC pair, but as an IP XR with bounce on another leaf switch pair. |
14.2(1i) and later |
|
A leaf switch crashes due to a routing loop in the IPFIB process. |
14.2(1i) and later |
|
A FEX link takes a long time (5+ minutes) to come up. |
14.2(1i) and later |
|
DHCP unicast renewal ACKs are not forwarded across the fabric to clients. This traffic is sourced from port 67 and destined to port 68. The regular Discover, Offer, Request, Acknowledge (DORA) process and unicast ACKs sourced from port 67 and destined to port 67 function correctly. |
14.2(1i) and later |
|
The IPS port is not down when an RX cable is removed on a Cisco ACI leaf switch 1G port. An ACI switch with 1G fiber would signal a peer IOS device, such as a Catalyst 6000 series switch, with flow control auto/desired to turn on the flow control. |
14.2(1i) and later |
|
After an upgrade, for one of the VRF tables, the BGP route map is missing on the spine switch, which results in bridge domain prefixes not being advertised. |
14.2(1i) and later |
|
An IPv6 BGP route with a recursive next hop is programmed in the software, but not in the hardware. Traffic destined to this route is blackholed. |
14.2(1i) and later |
|
A stale route map entry causes unexpected route leaking. |
14.2(1i) and later |
|
A spine switch reloads unexpectedly due to the service on the linecard having a hap-reset. |
14.2(1i) and later |
|
On a modular spine switch, an unconnected port's switching state is disabled, which means it is out of service. The issue is that after reloading a line card, all of the ports on that line card change to switching state enabled, even if the port is not connected to anything. This issue is mostly cosmetic; there is no real impact if an unconnected port has switching state enabled. |
14.2(1i) and later |
|
An IGMPv3 leave causes the multicast route OIL to be deleted even when there is an existing receiver subscribed to the group. Multicast traffic is interrupted until the existing receivers send a report in response to a general query. |
14.2(1i) and later |
|
After replacing the hardware for a leaf switch, the leaf switch front-panel ports are set to the admin-down state for 45 minutes. |
14.2(1i) and later |
|
A leaf node crashes when PFC or LLFC is enabled on a stretched fabric or a multi-tier fabric. PFC and LLFC are mainly used for FCoE and RoCE. For a stretched fabric, when a transit leaf node that has connectivity to spine nodes in both locations receives traffic that matches the QoS class with No-Drop-Cos and PFC enabled, the transit leaf node crashes. For a multi-tier fabric, when a tier-2 leaf node receives traffic that matches the QoS class with No-Drop-Cos and PFC enabled, the tier-2 leaf node crashes. |
14.2(1i) and later |
|
External route import for a VRF instance fails on a leaf switch after removing a shared services contract between two EPGs. |
14.2(1i) and later |
|
For a Cisco ACI fabric with more than 128 leaf switches in a given pod, such as 210 leaf switches in a single pod deployment, after enabling PTP globally, only 128 leaf switches are able to enable PTP. The remaining 82 leaf switches fail to enable PTP due to the error F2728 latency-enable-failed. |
14.2(1i) and later |
|
A route profile that matches on a community list and sets the local preference and community does not work after upgrading to a 5.2.x release:
route-map imp-l3out-L3OUT_WAN-peer-2359297, permit, sequence 4201
  Match clauses:
    community (community-list filter): peer16389-2359297-exc-ext-in-L3OUT_WAN_COMMUNITY-rgcom
  Set clauses:
    local-preference 200
    community xxxxx:101 xxxxx:500 xxxxx:601 xxxxy:4 additive
The match clause works as expected, but the set clause is ignored. |
14.2(1i) and later |
|
The sysmgr process crashes unexpectedly, causing the line card to reload. |
14.2(1i) and later |
|
A Cisco ACI leaf switch will reload with the following reset reason:
Reset Reason for this card:
Image Version : 14.2(7f)
Reset Reason (LCM): Unknown (0) at time Tue Mar 22 13:01:28 2022
Reset Reason (SW): Reset triggered due to HA policy of Reset (16) at time Tue Mar 22 12:56:21 2022
Service (Additional Info): pim hap reset
Reset Reason (HW): Reset triggered due to HA policy of Reset (16) at time Tue Mar 22 13:01:28 2022
Reset Cause (HW): 0x01 at time Tue Mar 22 13:01:28 2022
Reset internal (HW): 0x00 at time Tue Mar 22 13:01:28 2022 |
14.2(1i) and later |
|
An ACI switch's console may continuously output messages similar to: svc_ifc_eventmg (*****) Ran 7911 msecs in last 7924 msecs |
14.2(1i) and later |
|
After an upgrade, a leaf switch crashes periodically with the following reason:
show system reset-reason
*************** module reset reason (1) *************
0) At 2019-09-16T16:36:41.684+01:00
Reason: reset-triggered-due-to-ha-policy-of-reset
Service: cdp hap reset
Version: 14.2(1i)
1) At 2019-09-16T16:23:43.631+01:00
Reason: reset-triggered-due-to-ha-policy-of-reset
Service: cdp hap reset
Version: 14.2(1i) |
14.2(1i) |
|
The ACI N93360YC-FX2 leaf switch becomes inactive. |
14.2(1i) |
This section lists the resolved bugs. Click the bug ID to access the Bug Search tool and see additional information about the bug. The "Fixed In" column of the table specifies whether the bug was resolved in the base release or a patch release.
Table 17. Resolved Issues in This Release
Description |
Fixed In |
|
A Cisco ACI node reloads due to a Machine Check Exception similar to the following output:
[603029.390562] sbridge: HANDLING MCE MEMORY ERROR
[603029.390563] CPU 0: Machine Check Exception: 0 Bank 7: 8c00004000010091
[603029.390564] TSC 0 ADDR 2e3d3f40 MISC 140545486 PROCESSOR 0:50663 TIME 1569464793 SOCKET 0 APIC 0
[603029.390710] sbridge: HANDLING MCE MEMORY ERROR |
14.2(1l) |
|
With N9K-C9348GC-FXP top-of-rack switches, on ports where PoE is enabled in auto mode, there is a potential memory leak that can be seen in fault scenarios, such as the short-ckt fault, overcurrent, or max current. Leaks can also be observed when multiple negotiations occur with a powered device (PD) or when EPG or VLAN information on a PoE interface policy is changed multiple times. |
14.2(1l) |
|
After upgrading a leaf switch, the switch brings up the front panel ports before the policies are programmed. This may cause a connectivity issue if a connected host relies on the link level state to decide whether or not it can forward traffic on a particular NIC or port. The loss duration would be proportional to the scale of configuration policies that must be programmed. |
14.2(1l) |
|
MAC and IP endpoints are not learned on the local vPC pair. |
14.2(1i) |
|
In the 12.2(2i) release, the BPDU filter only prevents interfaces from sending BPDUs, but does not prevent interfaces from receiving BPDUs. |
14.2(1i) |
|
BGP EVPN has the tenant endpoint information, while COOP does not have the endpoint. |
14.2(1i) |
|
In Cisco ACI Multi-Site plus multi-pod topologies, there could be multicast traffic loss for about 30 seconds on the remote site. This occurs when only one line card has fabric links, the other line cards have no fabric links, and the line card with fabric links is reloaded. |
14.2(1i) |
|
When the MTU settings for OSPF neighboring router interfaces do not match, the routers will be stuck in the Exstart/Exchange state. This behavior is expected. This bug is an enhancement to raise a fault to the APIC so that the routers' stuck state can be easily detected by the administrator. |
14.2(1i) |
|
When viewing a congested interface, you do not see any drops in the output of the "show interface" command. If you type "vsh_lc" to drop into the linecard shell, and then view the platform counters for the given port, you can see Buffer Drops on output. |
14.2(1i) |
|
This is an enhancement to decode the binary logs offline directly from the techsupport. |
14.2(1i) |
|
BGP EVPN has the tenant endpoint information, while COOP does not have the endpoint. |
14.2(1i) |
|
Excessive SSD writes from ICMPv6 are observed, which can use up to 42 GB per day. |
14.2(1i) |
|
There is high SSD utilization on the standby supervisor for a 95xx ACI spine switch. |
14.2(1i) |
|
10-20 second packet loss is observed when the designated forwarder leaf switch comes back online after a reload. |
14.2(1i) |
|
A 10G AOC shows up as 10G ACU when using a passive QSA adapter. There is no functional impact; this is only a display issue. |
14.2(1i) |
|
An FX linecard unexpectedly reloads due to 'sdkhal hap reset' and an sdkhal core file is generated. |
14.2(1i) |
|
In a setup in which a leaf switch has two links to a spine switch, one link might flap a few times. The flapping seems to be triggered by a physical link flap (from the ethpm logs). After the link comes up, the IS-IS update never reaches URIB, so the leaf switch does not send any traffic on this link to the spine switch, even though the IS-IS database has the routes learned from this spine switch on both links. |
14.2(1i) |
|
With contract-based L3Out QoS classification, the current implementation needs to use different filters for the QoS filter and traffic permission filters. This makes the configuration complicated, and additional TCAM cost is required. |
14.2(1i) |
|
When the WAN optimization flag is enabled or disabled, the bridge domain GIPo on the TOR may not get updated, which results in traffic loss for that bridge domain. |
14.2(1i) |
|
A kernel panic is seen in some random scenarios. |
14.2(1i) |
|
While ACI switches are still initializing after an upgrade, TACACS requests are seen coming from the switch IP address, with the remote IP address set to 127.0.0.1 for the admin user. |
14.2(1i) |
|
The system GIPo mroute is not programmed with the system GIPo flag, and interpod BUM traffic for flows that hit the affected switch is dropped upon egress of the affected switch. |
14.2(1i) |
|
A CDP neighborship fails to form on the Cisco ACI side. CDP packets are captured in ELAM and SPAN, but not in TCPDUMP on the affected switch. |
14.2(1i) |
|
An ACI FEX HIF interface stays up after the parent switch reloads, crashes, or fails. |
14.2(1i) |
|
Remote Leaf Direct can be enabled for remote leaf switches without adding Routable Ucast, which is necessary for the Remote Leaf Direct feature. There is no UI validation to remind you to configure Routable Ucast before enabling Remote Leaf Direct. |
14.2(1i) |
|
After a spine switch upgrade, there is traffic loss for inter-pod traffic. |
14.2(1i) |
|
Traffic on a vPC is affected when the vPC peer is reloaded. |
14.2(1i) |
|
The Hardware Abstraction Layer (HAL) generates a core file. |
14.2(1i) |
|
Fault F0449 gets raised and the ASIC vrm(5) status fails on the Cisco N9K-93108TC-EX or N9K-93180YC-EX switches. |
14.2(1i) |
|
The "vsh -c show system internal epm mem-stats detail" command shows a continuous increase of memory usage for EPM_MEM_epm_dbg_rec_idx_t. This is a necessary condition, but is not sufficient, because there will be an increase in memory usage in normal cases due to event history record memory usage. This continuous increase causes the ToR switch to run out of memory and crash. |
14.2(1i) |
|
There is an EPM crash on a leaf switch that receives the Endpoint Announce packet with a malformed length field. |
14.2(1i) |
|
DHCP offers are dropped on the intermediate leaf switches. |
14.2(1i) |
|
Traffic loss may be observed for flows from local leaf switches to remote leaf switches when one of the remote leaf switches in a vPC is de-commissioned and commissioned again. |
14.2(1i) |
|
Traffic to be flooded in an EPG does not have fabricencap as the VNID in the IVXLAN header. Instead it has the primary VLAN that is configured for the path. |
14.2(1i) |
|
The "show version" command displays the incorrect chassis type for a 1 slot spine chassis. |
14.2(1i) |
|
Downgrading from the 14.1(2) release to an earlier release from the APIC does not complete, and the status in the APIC firmware tab shows as unknown. Reload the node to recover it. |
14.2(1i) |
|
Downgrading from the 14.2(1) release to an earlier release from the APIC does not complete, and the status in the APIC firmware tab shows as unknown. Reload the node to recover it. |
14.2(1i) |
|
An ACI N9K-C9348GC-FXP leaf switch crashes when a DAC cable is connected to SFP+ ports 49-50. The crash reason in the "show version" command is "poe hap reset." |
14.2(1i) |
|
Remote leaf switch shared services local switching traffic drops occur from an orphan endpoint to a vPC endpoint. |
14.2(1i) |
|
Changes to SSH parameters, such as SSH cipher and MAC algorithms, are not reflected on the switch. |
14.2(1i) |
|
On a leaf switch, the "show interface description" command output in the ACI mode does not match the output of the "show int description" command output in the VSH mode. |
14.2(1i) |
|
After a reboot of a leaf switch that is operating as a tier-1 leaf in a multi-tier ACI fabric, the leaf switch becomes stuck in a reboot loop because the sdkhal process crashes during boot. You can see the crash by running "show cores" on the switch. |
14.2(1i) |
|
Traffic is dropped when it is destined to a pervasive route and when the endpoint is not learned. This issue can be also seen on a border leaf switch when "disable remote EP learning" is set. |
14.2(1i) |
|
There is a rare timing issue seen during F5 failover, which triggers a simultaneous local learn on one vPC TOR and a sync update from the peer. This sequence could end up causing an inconsistency in EPMC on one vPC peer where the endpoint ends up pointing to a bounce entry even though it was learned on the front panel. |
14.2(1i) |
|
1) Deploy the breakout configuration. 2) Deploy a port channel or vPC configuration on these broken-out ports. 3) Remove the breakout configuration. The port channel or vPC configuration is still present in the APIC. 4) Deploy the breakout configuration. This action causes a port channel bringup failure, or causes the port channel manager or eth_port_manager to crash on the switch. This issue occurs when the vPC or port channel configuration is present even before the breakout is applied. |
14.2(1i) |
|
An FX2 leaf switch reloads due to reset-requested-due-to-fatal-module-error, specifically due to an sdkhal crash. |
14.2(1i) |
|
Tier 2 switches reboot every 10 minutes. Spine switches also reload once with the same IS-IS core file backtrace. |
14.2(1i) |
|
Spine switches will not export flows in the absence of the controller IP address or if the controller and collectors are in different subnets. |
14.2(1i) |
|
Route-maps used for redistribution into OSPF are not shown when running the "show ip ospf vrf <name>" command in VSH mode. |
14.2(1i) |
|
A GOLF-enabled VRF instance is put into the Down state on the spine switches. This can be confirmed with the "show bgp process vrf <vrf-name>" command from the CLI of the spine switches. Behaviors that may indicate this issue include a loss of reachability to the endpoints in a GOLF-enabled VRF instance and missing routes on the leaf switch for the VRF instance in question. |
14.2(1i) |
|
On an ACI leaf switch, the "show mcp internal event-history trace detail" command shows the receipt of all BPDUs including config BPDUs and TCNs. The type field for TCNs is not correctly reported as type: 80. |
14.2(1i) |
|
This enhancement requests evaluating the addition of the "show stats_manager internal event-history trace" command output to the switch techsupport, which is needed to troubleshoot atomic counters. |
14.2(1i) |
|
A leaf switch crashes with the "Unknown" reset reason when the breakout ports configuration is re-applied. The reset reason for this switch is as follows: Image Version : 13.2(3o) Reset Reason (LCM): Unknown (0) at time Fri Jul 12 14:21:14 2019 Reset Reason (SW): Reset triggered due to HA policy of Reset (16) at time Fri Jul 12 14:17:40 2019 Service (Additional Info): Reset triggered due to HA policy of Reset |
14.2(1i) |
|
Export counters do not increase, which indicates that no export is happening. |
14.2(1i) |
|
Posting the IPv6 interface configuration (including BFD enable) by using the API in an L3Out results in SVIs using the secondary IP address as the BFD source IP address. This causes the BFD session to fail. |
14.2(1i) |
|
Multiple log files for the same component are created under the /var/sysmgr/tmp_logs directory. This happens when the component transitions to binary logging. |
14.2(1i) |
|
The Cisco N9K-C93180YC-FX leaf switch sometimes unexpectedly reloads if you run the "show system internal epmc bltrace" command in the vsh_lc mode. |
14.2(1i) |
This section lists bugs that describe known behaviors. Click the Bug ID to access the Bug Search Tool and see additional information about the bug. The "Exists In" column of the table specifies the 14.2(1) releases in which the known behavior exists. A bug might also exist in releases other than the 14.2(1) releases.
Table 18 Known Behaviors in This Release
Bug ID |
Description |
Exists In |
When configuring output SPAN on a FEX HIF interface, Layer 3 switched packets going out of that FEX HIF interface are not spanned. Only Layer 2 switched packets going out of that FEX HIF are spanned. |
14.2(1i) and later |
|
When output span is enabled on a port where the filter is VLAN, multicast traffic in the VLAN that goes out of that port is not spanned. |
14.2(1i) and later |
|
The show interface command shows the tunnel's Rx/Tx counters as 0. |
14.2(1i) and later |
|
The show vpc brief command displays the wire-encap VLAN IDs and the show interface .. trunk command displays the internal/hardware VLAN IDs. The two VLAN IDs are allocated and used differently, so there is no correlation between them. |
14.2(1i) and later |
|
Continuous "threshold exceeded" messages are generated from the fabric. |
14.2(1i) and later |
|
Switch rescue user ("admin") can log into fabric switches even when TACACS is selected as the default login realm. |
14.2(1i) and later |
|
An extra 4 bytes is added to the untagged packet with Egress local and remote SPAN. |
14.2(1i) and later |
|
When the command show ip ospf vrf <vrf_name> is run from bash on the border leaf, the checksum field in the output always shows a zero value. |
14.2(1i) and later |
|
When an IP address moves from one MAC behind one ToR to another MAC behind another ToR, the GARP packet that the VM sends is not flooded in ARP unicast mode. As a result, any other host that still has the original MAC-to-IP binding will send L2 packets to the original ToR (based on MAC lookup), and the packets will be sent out on the old port (location). Without flooding the GARP packet in the network, hosts will not update the MAC-to-IP binding. |
14.2(1i) and later |
|
When modifying the L2Unknown Unicast parameter on a Bridge Domain (BD), interfaces on externally connected devices may bounce. Additionally, the endpoint cache for the BD is flushed and all endpoints will have to be re-learned. |
14.2(1i) and later |
|
If an endpoint has multiple IPs, the endpoint will not be aged until all IPs go silent. If one of the IP addresses is reassigned to another server/host, the fabric detects it as an IP address move and forwarding will work as expected. |
14.2(1i) and later |
|
The power supply will not be detected after performing a PSU online insertion and removal (OIR). |
14.2(1i) and later |
|
The access-port operational status is always "trunk". |
14.2(1i) and later |
|
An MSTP topology change notification (TCN) on a flood domain (FD) VLAN may not flush endpoints learned as remote where the FD is not deployed. |
14.2(1i) and later |
|
The transceiver type for some Cisco AOC (active optical) cables is displayed as ACU (active copper). |
14.2(1i) and later |
|
Any TCAM that is full, or nearly full, will raise the usage threshold fault. Because the faults for all TCAMs on leaf switches are grouped together, the fault will appear even on those with low usage. Workaround: Review the leaf switch scale and reduce the TCAM usage. Contact TAC to isolate further which TCAM is full. |
14.2(1i) and later |
|
The default route is not leaked by BGP when the scope is set to context. The scope should be set to Outside for default route leaking. |
14.2(1i) and later |
|
If the TOR 1RU system is configured with the RED fan (reverse airflow), the air flows from front to back. The temperature sensor in the back is defined as an inlet temperature sensor, and the temperature sensor in the front is defined as an outlet temperature sensor. If the TOR 1RU system is configured with the BLUE fan (normal airflow), the air flows from back to front. The temperature sensor in the front is defined as an inlet temperature sensor, and the temperature sensor in the back is defined as an outlet temperature sensor. From the airflow perspective, the inlet sensor reading should always be less than the outlet sensor reading. However, in the TOR 1RU family, the front panel temperature sensor has some inaccurate readings due to front panel utilization and configuration, which causes the inlet temperature sensor reading to be very close to, equal to, or even greater than the outlet temperature reading. |
14.2(1i) and later |
|
If Backbone and NSSA areas are on the same leaf, and default route leak is enabled, Type-5 LSAs cannot be redistributed to the Backbone area. |
14.2(1i) and later |
|
Traffic from the orphan port to the vPC pair is not recorded against the tunnel stats. Traffic from the vPC pair to the orphan port is recorded against the tunnel stats. |
14.2(1i) and later |
|
Traffic from the orphan port to the vPC pair is only updated on the destination node, so the traffic count shows as excess. |
14.2(1i) and later |
|
If a bridge domain "Multi Destination Flood" mode is configured as "Drop", the ISIS PDU from the tenant space will get dropped in the fabric. |
14.2(1i) and later |
|
Atomic counters on the border leaf do not increment for traffic from an endpoint group going to the Layer 3 out interface. |
14.2(1i) and later |
|
Atomic counters on the border leaf do not increment for traffic from the Layer 3 out interface to an internal remote endpoint group. |
14.2(1i) and later |
|
TEP counters from the border leaf to remote leaf nodes do not increment. |
14.2(1i) and later |
|
For direct server return operations, if the client is behind the Layer 3 out, the server-to-client response will not be forwarded through the fabric. |
14.2(1i) and later |
|
With the common pervasive gateway, only packets destined to the virtual MAC are properly Layer 3 forwarded. Packets destined to the bridge domain custom MAC fail to be forwarded. This causes issues with certain appliances that rely on the incoming packet's source MAC to set the return packet's destination MAC. |
14.2(1i) and later |
|
BCM does not have a stats option for yellow packets/bytes, so yellow packet/byte statistics do not appear in the switch or APIC GUI stats/observer. |
14.2(1i) and later |
|
Bidirectional Forwarding Detection (BFD) echo mode is not supported on IPv6 BFD sessions carrying link-local as the source and destination IP address. BFD echo mode also is not supported on IPv4 BFD sessions over multihop or VPC peer links. |
14.2(1i) and later |
|
Traffic is dropped between two isolated EPGs. |
14.2(1i) and later |
|
The iping command’s replies get dropped by the QOS ingress policer. |
14.2(1i) and later |
|
An overlapping or duplicate prefix/subnet could cause the valid prefixes not to be installed because of batching behavior on a switch. This can happen during an upgrade to the 1.2(2) release. |
14.2(1i) and later |
|
EPG statistics only count total bytes and packets. The breakdown of statistics into multicast/unicast/broadcast is not available on new hardware. |
14.2(1i) and later |
|
You must configure different router MACs for the SVI on each border leaf if the L3Out is deployed over port channels or ports with STP and the OSPF/OSPFv3/eBGP protocols are used. There is no need to configure different router MACs if you use vPC. |
14.2(1i) and later |
|
The default minimum bandwidth is used if the BW parameter is set to "0", and so traffic will still flow. |
14.2(1i) and later |
|
The debounce timer is not supported on 25G links. |
14.2(1i) and later |
|
With the N9K-C93180YC-EX switch, drop packets, such as MTU or storm control drops, are not accounted for in the input rate calculation. |
14.2(1i) and later |
|
For traffic coming out of an L3out to an internal EPG, stats for the actrlRule will not increment. |
14.2(1i) and later |
|
When subnet check is enabled, a ToR does not learn IP addresses locally that are outside of the bridge domain subnets. However, the packet itself is not dropped and will be forwarded to the fabric. This will result in such IP addresses getting learned as remote endpoints on other ToRs. |
14.2(1i) and later |
|
SAN boot over a virtual Port Channel or traditional Port Channel does not work. |
14.2(1i) and later |
|
A policy-based redirect (PBR) policy to redirect IP traffic also redirects IPv6 neighbor solicitation and neighbor advertisement packets. |
14.2(1i) and later |
|
The front port of the QSA and GLC-T 1G module has a 10- to 15-second delay as it comes up after insertion. |
14.2(1i) and later |
|
If you have only one spine switch that is part of the infra WAN and you reload that switch, there can be drops in traffic. You should deploy the infra WAN on more than one spine switch to avoid this issue. |
14.2(1i) and later |
|
Slow drain is not supported on FEX Host Interface (HIF) ports. |
14.2(1i) and later |
|
When endpoints in two different TOR pairs try to communicate across a spine switch, an endpoint does not get relearned after being deleted on the local TOR pair, even though the endpoint still has its entries on the remote TOR pair. |
14.2(1i) and later |
|
Bridge domain subnet routes advertised out of the Cisco ACI fabric through an OSPF L3Out can be relearned in another node belonging to another OSPF L3Out on a different area. |
14.2(1i) and later |
|
After upgrading a switch, Layer 2 multicast traffic flowing across PODs gets affected for some of the bridge domain Global IP Outsides. |
14.2(1i) and later |
|
There is a traffic blackhole that lasts anywhere from a few seconds to a few minutes after a border leaf switch is restored. |
14.2(1i) and later |
|
When downgrading a Cisco ACI fabric, the OSPF neighbors go down after downgrading the Cisco APICs from a 3.2 or later release to a pre-3.2 release while the switches are still running a 13.2 or later release. |
14.2(1i) and later |
|
During an upgrade on a dual-SUP system, the standby SUP may go into a failed state. |
14.2(1i) and later |
|
Output packets that are ERSPAN'd still have the PTP header. Wireshark might not be able to decode the packets, and instead shows frames with ethertype 0x8988. |
14.2(1i) and later |
|
There is a policy drop that occurs with L3Out transit cases. |
14.2(1i) and later |
■ The IPN should preserve the CoS and DSCP values of packets that enter the IPN from the ACI spine switches. If a default policy on these nodes changes the CoS value based on the DSCP value or by any other mechanism, you must apply a policy to prevent the CoS value from being changed. At a minimum, the remarked CoS value should not be 4, 5, 6, or 7. If the CoS value is changed in the IPN, you must configure a DSCP-CoS translation policy in the APIC for the pod that translates the queuing class information of the packet into the DSCP value in the outer header of the iVXLAN packet. You can also preserve CoS by enabling CoS preservation. For more information, see the Cisco APIC and QoS KB article.
■ The following properties within a QoS class under "Global QoS Class policies" should not be changed from their default values; they are used only for debugging purposes:
— MTU (default – 9216 bytes)
— Queue Control Method (default – Dynamic)
— Queue Limit (default – 1522 bytes)
— Minimum Buffers (default – 0)
■ The modular chassis Cisco ACI spine nodes, such as the Cisco Nexus 9508, support warm (stateless) standby where the state is not synched between the active and the standby supervisor modules. For an online insertion and removal (OIR) or reload of the active supervisor module, the standby supervisor module becomes active, but all modules in the switch are reset because the switchover is stateless. In the output of the show system redundancy status command, warm standby indicates stateless mode.
■ When a recommissioned APIC controller rejoins the cluster, GUI and CLI commands can time out while the cluster expands to include the recommissioned APIC controller.
■ If connectivity to the APIC cluster is lost while a switch is being decommissioned, the decommissioned switch may not complete a clean reboot. In this case, the fabric administrator should manually complete a clean reboot of the decommissioned switch.
■ Before expanding the APIC cluster with a recommissioned controller, remove any decommissioned switches from the fabric by powering down and disconnecting them. Doing so will ensure that the recommissioned APIC controller will not attempt to discover and recommission the switch.
The following list describes IGMP snooping known behaviors:
■ Multicast router functionality is not supported when IGMP queries are received with VxLAN encapsulation.
■ IGMP Querier election across multiple Endpoint Groups (EPGs) or Layer 2 outsides (External Bridged Network) in a given bridge domain is not supported. Only one EPG or Layer 2 outside for a given bridge domain should be extended to multiple multicast routers if any.
■ The rate of the number of IGMP reports sent to a leaf switch should be limited to 1000 reports per second.
■ Unknown IP multicast packets are flooded on ingress leaf switches and border leaf switches, unless "unknown multicast flooding" is set to "Optimized Flood" in a bridge domain. This knob can be set to "Optimized Flood" only for a maximum of 50 bridge domains per leaf switch.
If "Optimized Flood" is enabled for more than the supported number of bridge domains on a leaf, follow these configuration steps to recover:
— Set "unknown multicast flooding" to "Flood" for all bridge domains mapped to a leaf switch.
— Set "unknown multicast flooding" to "Optimized Flood" on needed bridge domains.
■ Traffic destined to Static Route EP VIPs sourced from N9000 switches (switches with names that end in -EX) might not function properly because proxy route is not programmed.
■ An iVXLAN header of 50 bytes is added to traffic ingressing the fabric. A bandwidth allowance of 50/(50 + ingress_packet_size) needs to be made to prevent oversubscription. If the allowance is not made, oversubscription might occur, resulting in buffer drops.
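As a rough illustration of the allowance above, the overhead fraction can be computed from the 50-byte header and the ingress packet size (a minimal sketch; the helper name is hypothetical, not a Cisco tool):

```python
def ivxlan_overhead_fraction(ingress_packet_size: int, header_bytes: int = 50) -> float:
    """Fraction of additional fabric bandwidth consumed by the iVXLAN header,
    per the allowance 50/(50 + ingress_packet_size)."""
    return header_bytes / (header_bytes + ingress_packet_size)

# Smaller packets incur proportionally more overhead:
print(ivxlan_overhead_fraction(1450))  # ~3.3% extra for 1450-byte packets
print(ivxlan_overhead_fraction(100))   # ~33% extra for 100-byte packets
```

The takeaway is that the required headroom grows sharply as the average ingress packet size shrinks, which is when buffer drops from oversubscription become most likely.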
The following list describes IpEpg (IpCkt) known behaviors:
■ An IP/MAC Ckt endpoint configuration is not supported in combination with static endpoint configurations.
■ An IP/MAC Ckt endpoint configuration is not supported with Layer 2-only bridge domains. Such a configuration will not be blocked, but the configuration will not take effect as there is no Layer 3 learning in these bridge domains.
■ An IP/MAC Ckt endpoint configuration is not supported with external and infra bridge domains because there is no Layer 3 learning in these bridge domains.
■ An IP/MAC Ckt endpoint configuration is not supported with a shared services provider configuration. The same or overlapping prefix cannot be used for a shared services provider and IP Ckt endpoint. However, this configuration can be applied in bridge domains having shared services consumer endpoint groups.
■ An IP/MAC Ckt endpoint configuration is not supported with dynamic endpoint groups. Only static endpoint groups are supported.
■ No fault is raised if the configured IP/MAC Ckt endpoint prefix is outside of the bridge domain subnet range, because a user can configure the bridge domain subnet and the IP/MAC Ckt endpoint in any order, so this is not an error condition. If the final configuration is such that a configured IP/MAC Ckt endpoint prefix is outside all bridge domain subnets, the configuration has no impact and is not an error condition.
■ Dynamic deployment of contracts based on instrImmedcy set to onDemand/lazy is not supported; only immediate mode is supported.
The following list describes direct server return (DSR) known behaviors:
■ When a server and load balancer are in the same endpoint group, make sure that the server does not generate ARP/GARP/ND requests, responses, or solicits. Otherwise, the load balancer virtual IP (VIP) will be learned toward the server, defeating the purpose of DSR support.
■ Load balancers and servers must be Layer 2 adjacent. Layer 3 direct server return is not supported. If a load balancer and servers are Layer 3 adjacent, then they have to be placed behind the Layer 3 out, which works without a specific direct server return virtual IP address configuration.
■ Direct server return is not supported for shared services. Direct server return endpoints cannot be spread around different virtual routing and forwarding (VRF) contexts.
■ Configurations for a virtual IP address can only be /32 or /128 prefix.
■ Client to virtual IP address (load balancer) traffic will always go through the proxy-spine because fabric data-path learning of a virtual IP address does not occur.
■ GARP learning of a virtual IP address must be explicitly enabled. A load balancer can send GARP when it switches over from active-to-standby (MAC changes).
■ Learning through GARP will work only in ARP Flood Mode.
The Cisco Application Policy Infrastructure Controller (APIC) documentation can be accessed from the following website:
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
© 2019-2024 Cisco Systems, Inc. All rights reserved.