Overview
Cisco IOS® XRd complements existing physical Cisco® router platforms that run Cisco IOS XR Software, such as Cisco Network Convergence System routers, Cisco ASR 9000 Series Routers, and Cisco 8000 Series Routers. Service providers can improve their operations and offerings by leveraging the XRd platform to deliver new services that require routing functionality in a compact footprint. XRd deploys routing in a containerized form factor, enabling routing between network functions in the public cloud and those running on premises in data centers, and provides flexibility as an increasing number of network functions move to the public cloud. Cisco IOS XRd offers greater agility and improved network efficiency at lower capital and operational expenditures, and allows network capacity to be scaled up or down efficiently based on demand.
Features of Cisco IOS XRd Router
XRd runs on generic Kubernetes and can be integrated into Kubernetes-based platforms such as Amazon Elastic Kubernetes Service (EKS), as described in this document. The XRd software provides routing functionality that can be administered in the same way as other applications operating within the data center or cloud environment.
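As a minimal sketch of what such an integration can look like, the following commands deploy an XRd Control Plane workload into an existing EKS cluster using XRd Helm charts. The cluster name, region, Helm repository URL, chart name, image repository, and tag shown here are illustrative assumptions, not a prescribed procedure.

```
# Point kubectl at the EKS cluster that will host XRd
# (cluster name and region are placeholders).
aws eks update-kubeconfig --name my-xrd-cluster --region us-east-1

# Add an XRd Helm chart repository (assumed URL) and install an
# XRd Control Plane instance; image repository and tag are placeholders.
helm repo add xrd https://ios-xr.github.io/xrd-helm
helm repo update
helm install xrd1 xrd/xrd-control-plane \
  --set image.repository=<ecr-registry>/xrd-control-plane \
  --set image.tag=<xr-version>

# Verify that the XRd pod is running.
kubectl get pods -l app.kubernetes.io/instance=xrd1
```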
XRd software, which is based on the IOS XR7 release, has a lightweight design and an expedited boot process. XRd supports contemporary operating-system programmability features, including YANG models (both native and OpenConfig) and Model-Driven Telemetry.
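For illustration, the sketch below shows a minimal Model-Driven Telemetry configuration in Cisco IOS XR CLI. The sensor path, collector address, port, group names, and sample interval are assumed values and should be adapted to the telemetry collector actually in use.

```
! Stream interface counters to an external collector over gRPC
! (addresses, names, and intervals are placeholders).
telemetry model-driven
 destination-group COLLECTORS
  address-family ipv4 192.0.2.10 port 57500
   encoding self-describing-gpb
   protocol grpc no-tls
  !
 !
 sensor-group INTERFACE-COUNTERS
  sensor-path Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters
 !
 subscription SUB1
  sensor-group-id INTERFACE-COUNTERS sample-interval 30000
  destination-id COLLECTORS
 !
!
```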
XRd software is a derivative of the highly resilient, stable, and feature-rich Cisco IOS XR Software and has the same northbound interfaces and management features as Cisco IOS XR. As a result, XRd integrates seamlessly with existing monitoring, automation, and orchestration systems.
Cisco IOS XRd is available as:
- Cisco IOS XRd Control Plane
- Cisco IOS XRd vRouter
For more details, see the Cisco IOS XRd Data Sheet.
Known Issues
The following table describes the known issues and their workarounds.
Issue | Description | Condition | Workaround
---|---|---|---
Traffic reconvergence delay during XRd vRouter Cloud HA failover | This issue affects the XRd vRouter Cloud HA solution, causing a delay in traffic reconvergence when transitioning from the primary to the secondary XRd vRouter instance. The delay can last up to the ARP cache timeout on the client workload, which is 4 hours for Cisco IOS XR-based workloads; Linux-based workloads have a default ARP cache timeout of 30 seconds. | The likelihood of encountering the issue increases when failover is initiated immediately after launching or restarting the EC2 instance that hosts the secondary XRd vRouter instance. Standard prerequisites for the XRd vRouter Cloud High Availability solution include the presence of the XRd HA App container application in the Amazon EKS (Elastic Kubernetes Service) cluster and the configuration of Linux networking, VRRP, and telemetry settings in Cisco IOS XR. | Clear the ARP cache of the workload after an unsuccessful failover (see the example commands following this table).
Unresponsiveness and constant restarting of XRd Control Plane and XRd vRouter following a container restart | Following a container restart, the XRd Control Plane or XRd vRouter becomes unresponsive, and the usual management protocols such as SSH, NETCONF, and gNMI are ineffective. Interaction through the container orchestrator (the kubectl command) remains possible, but numerous XR processes restart continuously, which is observable in the logs retrieved through kubectl. It is important to distinguish between a container restart and a pod restart: the former occurs due to a catastrophic device failure, while the latter is part of expected lifecycle operations. This issue, linked to a separate catastrophic trigger, is attributed to an upstream environment problem, possibly originating in the Linux kernel or the container runtime. | The affected platforms are XRd Control Plane and XRd vRouter, across all releases of Cisco IOS XR. The issue also affects all Amazon EKS versions from 1.23 to 1.29, specifically on worker nodes that run the XRd-optimized Amazon Machine Image (AMI), which is based on Amazon's EKS-optimized AMI. | Recover the pod by restarting it with the kubectl rollout restart sts/<sts-name> command (see the example following this table).
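For the traffic reconvergence issue above, the following commands illustrate one way to clear the ARP cache on the client workload after an unsuccessful failover; the interface names and location are placeholders, and the exact commands depend on the workload's operating system.

```
# On a Linux-based workload: flush ARP (neighbor) entries learned on the
# interface that faces the XRd vRouter (interface name is a placeholder).
ip neigh flush dev eth1

# On a Cisco IOS XR-based workload: clear the ARP cache for the
# equivalent interface (interface and location are placeholders).
clear arp-cache GigabitEthernet0/0/0/0 location 0/0/CPU0
```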
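For the container restart issue above, the following kubectl commands sketch how the affected pod can be inspected and recovered; the StatefulSet name, pod name, and namespace are placeholders for the values used in your deployment.

```
# Confirm that XR processes are restarting repeatedly by inspecting the
# container logs (pod name and namespace are placeholders).
kubectl logs xrd-vrouter-0 -n xrd --tail=100

# Recover the pod by triggering a rolling restart of its StatefulSet.
kubectl rollout restart sts/xrd-vrouter -n xrd

# Wait for the restarted pod to become Ready again.
kubectl rollout status sts/xrd-vrouter -n xrd
kubectl get pods -n xrd
```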