Introduction
If you have a private cloud, you might run part of your workload on a public cloud. However, migrating workloads to the public cloud requires working with a different cloud provider interface and learning different ways to set up connectivity and define security policies. These challenges can result in increased operational cost and a loss of consistency. Cisco Cloud Network Controller addresses these challenges by extending a Cisco Multi-Site fabric to Amazon Web Services (AWS), Microsoft Azure, or Google Cloud public clouds. You can also mix AWS, Azure, and Google Cloud in your deployment.
This document describes the features, issues, and limitations for the Cisco Cloud Network Controller software. For the features, issues, and limitations for the Cisco APIC, see the appropriate Cisco Application Policy Infrastructure Controller Release Notes. For the features, issues, and limitations for the Cisco Multi-Site Orchestrator/Nexus Dashboard Orchestrator, see the appropriate Cisco Nexus Dashboard Orchestrator Release Notes.
For more information about this product, see "Related Content."
Note: The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product.
Date | Description
October 8, 2022 | Release 25.1(1e) became available.
New Software Features
Feature | Description
Support for importing existing Google Cloud brownfield VPCs into Cisco Cloud Network Controller | This release provides support for importing existing Google Cloud brownfield VPCs into Cisco Cloud Network Controller. For more information, see Importing Existing Brownfield Google Cloud VPCs Into Cisco Cloud Network Controller.
Support for applying filters to Google Cloud statistics | Beginning with Cisco Cloud Network Controller release 25.1(1), you can apply statistics filters to the Google Cloud flow logs. For more information, see Cisco Cloud Network Controller for Google Cloud User Guide, Release 25.1(x).
Catalyst 8000V license registration and unregistration improvements | This release improves the license registration and unregistration process for the Catalyst 8000V.
Multicloud scale improvements for AWS and Azure clouds | This release provides multicloud scale improvements for AWS and Azure clouds. For more information, see Cisco Cloud Network Controller Verified Scalability Guide, Release 25.1(1).
Supported Upgrade Paths
Cisco Cloud Network Controller supports policy-based upgrades for the following upgrade paths:
· For AWS and Azure:
o Release 5.2(1) to 25.1(1)
o Release 25.0(1) to 25.1(1)
o Release 25.0(2) to 25.1(1)
o Release 25.0(3) to 25.1(1)
· For AWS, Azure, and Google Cloud:
o Release 25.0(4) to 25.1(1)
o Release 25.0(5) to 25.1(1)
Changes in Behavior
There are no changes in behavior in this release.
Open Issues
Click the bug ID to access the Bug Search tool and see additional information about the bug. The "Exists In" column of the table specifies the 25.1(1) releases in which the bug exists. A bug might also exist in releases other than the 25.1(1) releases.
Bug ID | Description | Exists in
 | The "Cloud Access Privilege" for certain brownfield Cloud Context Profiles in the tabular view under "Application Management" may sometimes show as "Not Applicable". | 25.1(1e) and later
 | When CCRs are scaled up or managed in a region within one minute of a CCR scale-down or of unmanaging the CCRs in that region, the new CCRs are not programmed with licenses. A fault on the UI indicates that the CCR is not programmed with licenses and that the hcplatformlicense operational state is down. | 25.1(1e) and later
 | Flow filter options may still be available in the statistics chart drop-down after the filter is deleted. | 25.1(1e) and later
 | Existing traffic is black-holed when Catalyst 8000Vs are added to a new region. | 25.1(1e) and later
 | In a scale setup (approximately 300 spoke VNets), deploying the CCR in a non-home region can take more than 4 hours. The delay occurs in Azure while creating the approximately 300 VNet peerings and subnets. The Azure activity log of the resource group shows retryable subnet-creation failures; these failures occur because the previous subnet creation has not finished, and they are harmless because the retries eventually succeed. | 25.1(1e) and later
 | When you upgrade from release 25.0(4) to 25.0(5k) on a Cloud Network Controller for Google Cloud with CCRs deployed, if the underlay to the on-premises site is changed from public internet to private peering with IPsec enabled after the upgrade, the existing OSPF sessions go down and move to the init state. | 25.0(5k) and later
 | When moving from private peering to public internet, stale rules remain on GCP; these are specific IP rules. When moving from private peering to public internet, NDO does not delete the cloudtemplateNextHopIp MOs (which represent the on-premises TEP pool). In the case of AWS and Azure, the Cloud Network Controller detects whether private peering is static (no IPsec) or dynamic (IPsec enabled) and sets the route reachability on BgpPeerP to Static or Dynamic. Today, static routes are programmed in the Cisco Cloud Routers and firewall rules are created in AWS and Azure only if the route reachability is Static; the static routes and firewall rules are deleted if the route reachability changes to Dynamic. This behavior causes issues in GCP, because firewall rules must always be programmed for GCP private peering. Because cloudtemplateNextHopIp is never deleted, the existence or deletion of this MO cannot be relied on to program or clean up firewall rules on GCP. A new type needs to be introduced for GCP private peering so that firewall rules can still be programmed without programming static routes in the Cisco Cloud Routers when IPsec is enabled. Currently in GCP, the firewall rules depend on the existence or deletion of the cloudtemplateNextHopIp MOs; because NDO never deletes them, the rules are never deleted on the Cloud Network Controller. | 25.0(5k) and later
 | In some cases, the BGP session between the on-premises router and the Cisco Catalyst 8000V router in the cloud is down. Although the crypto sessions and tunnels are up, there can be some traffic loss when this happens. | 25.0(5k) and later
 | In some cases, the public IP address and the operational state of the network interfaces of the Cisco Catalyst 8000V on AWS are shown as "N/A". In rare cases, this issue may also be seen on other cloud sites. | 25.0(5k) and later
 | After triggering the upgrade for the Cisco Cloud Routers, a few routers may continue running the older version. | 25.0(5k) and later
 | On an AWS Cloud Network Controller with transit gateway Connect enabled, new transit gateway Connect peers fail until the stale transit gateway Connect peers are deleted when either of the following conditions applies: 1. The infra AWS account was not cleaned before deploying the Cloud Network Controller, and stale transit gateway Connect peers remain on a stale transit gateway Connect. 2. The new transit gateway Connect peers are assigned the same inner IP address as the stale transit gateway Connect peers. When this happens, the Cloud Network Controller should raise a fault, but currently it does not. | 25.0(5k) and later
 | On the Azure Cloud Network Controller user interface, the BGP and OSPF sessions are seen as "down" after upgrading to the 25.0(5k) release. This issue is not seen on a fresh deployment. | 25.0(5k) and later
 | A cloud subnet in a VRF is not reachable from an on-premises device or from a device in another cloud in another VRF. | 25.0(5k) and later
 | You might not be able to SSH to the Cisco Cloud Network Controller on Azure after launching it for the first time, during the first 5-10 minutes of system bring-up. | 25.0(5k) and later
 | The Cisco Catalyst 8000V license is not deregistered from the Cisco Smart License Server because the Cisco Catalyst 8000V loses its connection to the internet before being deleted. | 25.0(4k) and later
 | There is no functional issue, but stale licenses are configured on the Cisco Catalyst 8000V. On programming a T2 license on the Cisco Catalyst 8000V, a stale license entry is seen on both the Cisco Catalyst 8000V and the Cisco Smart account. | 25.0(4k) and later
 | The following fault appears on the Cisco Cloud APIC, indicating that the license is not configured on the CCR: "Oper State of HcplatformLicense is down with administrative-down." | 25.0(4k) and later
 | When you stay on the VRFs table, the VRF internal/external statuses sometimes do not update for a long time. | 25.0(4k) and later
 | The HcloudCtxOper of the infra VNet, which is used to deploy the NLB/ALB, is down. Because one of the network interfaces associated with the VNet is in a failed state, Cisco Cloud APIC is unable to add a new CIDR to the VNet. | 25.0(4k) and later
 | When BFD is enabled between sites, the BFD sessions go down and come back up quickly, with no user intervention or trigger. | 25.0(4k) and later
 | In a few specific cases, on importing a brownfield VNet with a Routing & Security access policy, an NSG might not get created or VNets might not peer. | 25.0(4k) and later
 | Disabling "Hub peering" triggers a CSR deployment that restricts the CSR count and increases the bandwidth together in a single action. | 25.0(3k) and later
 | When a brownfield VPC is imported into Cisco Cloud APIC, you must handle the creation and management of the route table and route table entries, the security group rules, and the transit gateway VPC attachment (see the transit gateway attachment sketch following this table). After you create the transit gateway VPC attachment with the infra transit gateway for the brownfield VPC in the AWS console, the corresponding cloudCtxPeerOper for the brownfield cloud context profile moves from the Failed state to the Configured state. After that, if the created transit gateway attachment for the brownfield VPC is deleted in the AWS console, cloudCtxPeerOper does not move back to the Failed state. | 25.0(2e) and later
 | When a brownfield VPC is imported into Cisco Cloud APIC through a REST API POST, a new cloud context profile is created (see the import sketch following this table). Under this cloud context profile, a cloudRsCtxToAccessPolicy is created, which is a relation to a read-only access policy. One or more cloudCidrs and a cloudBrownfield with a cloudIDMapping, which holds the VPC ID, are posted to the Cloud APIC. The cloudIDMapping, which contains the VPC ID, points to the VPC present in the cloud. If the VPC ID is nonexistent, or if there is any difference between the cloudCidrs posted and the CIDRs present in the VPC, cloudCtxOper and cloudCidrOper move to a Failed state. But because of the delegate's distinguished name, the imported unmanaged VPC shows as healthy. | 25.0(2e) and later
 | The UI dashboard shows the wrong status for inter-region connectivity. | 25.0(2e) and later
 | An external endpoint group of type non-internet cannot leak all routes (or public internet IPs) to a cloud endpoint group. Leaking a route results in creating a static route in the VNet route table that points to the CSR NLB for that destination CIDR; if it is leak-all, 0.0.0.0/0 is configured to point to the CSR NLB. In the case of the Azure Cloud APIC, no static route to the internet is created; Azure does this implicitly by default, and the Cloud APIC only programs the rules. When a user creates a contract or route leak, the Cloud APIC programs a static route to the CSR NLB. This overrides the Azure implicit default route to the internet if the CIDR overlaps with the public internet IPs, because leak-all creates a 0.0.0.0/0 route that points to the CSR NLB. This is an invalid configuration unless the user intended it, that is, the user intended to log in to the VM through its private IP through the CSR. If not, the user has to leak specific routes to the cloud endpoint group. | 25.0(2e) and later
 | In some cases, the Cloud APIC is unable to deploy the infra configuration, such as creating cloud routers in the overlay-1 VPC. This is sometimes seen in new deployments, but not in upgrade scenarios. Cloud routers and other configurations do not get deployed in Google Cloud. | 25.0(2e) and later
 | Transit gateway external connectivity does not get deployed in regions where cloud context profiles are deployed. | 25.0(2e) and later
 | When an IKEv1 tunnel is configured to a destination while another tunnel already exists to the same destination but with a different source interface, the new tunnel's protocol remains shown as down. | 25.0(2e) and later
 | This issue occurs when there is a misconfiguration in which the local subnet is provided under routes leaked from an external VRF to an internal VRF. This issue applies only to the Azure Cloud APIC, because AWS does not allow programming routes that overlap with the VPC's CIDR. | 25.0(2e) and later
 | When tunnels are created from source interface Gig 2 or Gig 4, an allow-all security group rule is configured. In the future, the allow-all rule needs to be removed and only explicit IP addresses should be allowed. | 25.0(2e) and later
 | This is not a functional issue. No fault is shown in the Cloud APIC UI when the Border Gateway Protocol sessions of the transit gateway external connectivity are down. | 25.0(2e) and later
 | If the Cloud APIC is rebooted, in rare cases an expected rule may not be programmed on the cloud due to a timing mismatch in the reconcile and programming workflow. | 25.0(2e) and later
 | In the VPC route table, the route table entry for a destination CIDR pointing to the transit gateway is sometimes missing when a tenant or contract is quickly deleted and re-added, or when moving from the transit gateway Connect solution to the legacy transit gateway solution. In either case, this happens only with the legacy transit gateway solution. This is a timing issue with the legacy transit gateway solution, in which two transit gateways are created per hub network in a region. It can happen either when moving to the legacy transit gateway solution or when the legacy transit gateways come up for the first time. These conditions result in deleting the route table entry for a given destination CIDR and adding the same entry back at the same time. Due to an issue with the AWS API, which returns a deleted route table entry as a non-deleted entry, the Cloud APIC deletes the wrong entry. | 25.0(2e) and later
 | TACACS monitoring of the destination group is not supported through the GUI. | 25.0(1c) and later
 | Statistics seen on the Cisco Cloud APIC are sometimes not in sync with Azure statistics. | 25.0(1c) and later
 | Adding an EPG endpoint selector fails with an error message saying the selector is already attached. | 25.0(1c) and later
 | The route nextHop is not set to the redirect service node specified in the service graph. | 25.0(1c) and later
 | When the CSR bandwidth needs to be increased, the user must undeploy all of the CSRs in all of the regions and redeploy them with the desired bandwidth, which can cause traffic loss. | 25.0(1c) and later
 | When the "AllowAll" flag is enabled on a service device, such as a native load balancer, or on the logical interface of a third-party device, it is possible to see some specific rules apart from a rule that allows all traffic from any source to any destination. | 25.0(1c) and later
 | The eventmgr crashes when handling a fault triggered by a new cloud account. | 25.0(1c) and later
 | The Cisco Cloud APIC GUI shows the total allowed count for CtxProfile, VRF (fvCtx), EPGs, and contracts. These numbers have been validated only for Azure-based deployments. For AWS deployments, the supported numbers are much lower. | 25.0(1c) and later
 | Cloud routers may not get created if external network objects are not configured. The external network configuration is required for configuring cloud routers. | 25.0(1c) and later
 | Cisco Cloud APIC in this release limits the number of regions where the hub network can be deployed in order to establish external connectivity (see the hub network sketch following this table). When you attempt to deploy or configure the hub network in more than four regions, the configuration is rejected with the following error: Invalid Configuration CT_INTNETWORK_REGION_MAXIMUM: At present, there can be at most 4 cloudRegionName in cloudtemplateIntNetwork uni/tn-infra/infranetwork-default/intnetwork-default; current count = <total-hubnetwork-regions-attempted> | 25.0(1c) and later
 | Customers are restricted to shorter key-value pairs than they need. | 25.0(1c) and later
 | An incorrect DNS server is configured on Cisco Cloud APIC with Google Cloud. Though it is not directly used when deploying Cisco Cloud APIC with Google Cloud, an incorrect IP address is configured. | 25.0(1c) and later
 | When a delete followed by an add operation on a tenant or other resources (such as VPCs and contracts) is done within a short span of time, the resource deployment in Google Cloud may get out of sync with the configuration on Cisco Cloud APIC. The likelihood of this happening is directly proportional to the scale of the configuration and how quickly the operations are done. Resources may be either not created or not deleted on Google Cloud to match the user configuration on Cisco Cloud APIC. | 25.0(1c) and later
 | When performing a Cisco Cloud APIC upgrade (but not also performing a CSR upgrade), before the upgrade is finished and while the Cisco Cloud APIC is reconciling the CSR configurations, if you delete certain configurations and add the same configurations back (for example, if you delete a VRF and add the VRF back), a traffic drop may happen. It should eventually recover. | 25.0(1c) and later
 | When you scale up the number of CSRs or routers per region, some of the configuration may be missing on the newly created CSRs. This issue happens randomly on the newly created CSRs; in this case, tunnels or BGP sessions on the new CSRs may be down due to the missing configuration. | 25.0(1c) and later
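For the brownfield VPC issue above, the following transit gateway attachment sketch expresses the manual AWS console step as API calls instead. It is illustrative only, not Cisco tooling; the boto3 calls are standard AWS EC2 APIs, and all IDs and the region are placeholders.

```python
# Sketch using standard boto3 EC2 APIs; all IDs and the region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Attach the brownfield VPC to the infra transit gateway (the step the
# description says you must perform yourself for brownfield VPCs).
resp = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",  # infra transit gateway
    VpcId="vpc-0123456789abcdef0",             # brownfield VPC
    SubnetIds=["subnet-0123456789abcdef0"],    # subnet(s) to attach
)
attachment_id = resp["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"]
print("created attachment:", attachment_id)

# Per the bug: deleting this attachment later does NOT move the corresponding
# cloudCtxPeerOper back to the Failed state on the Cloud APIC.
# ec2.delete_transit_gateway_vpc_attachment(TransitGatewayAttachmentId=attachment_id)
```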
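For the brownfield VPC REST API issue above, the following import sketch shows the general shape of the POST that the description walks through. The MO class names (cloudCtxProfile, cloudRsCtxToAccessPolicy, cloudCidr, cloudBrownfield, cloudIDMapping) come from the description itself; the controller address, URL paths, attribute names, and values are illustrative assumptions rather than a verified payload.

```python
# Illustrative sketch only: the MO class names below come from the bug
# description; the tenant DN, attribute names, and values are assumptions,
# not a verified Cloud APIC payload.
import requests

CAPIC = "https://cloud-apic.example.com"  # hypothetical controller address

session = requests.Session()

# Standard APIC-style REST login.
session.post(
    f"{CAPIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "<password>"}}},
    verify=False,  # lab sketch; verify certificates in real use
).raise_for_status()

payload = {
    "cloudCtxProfile": {
        "attributes": {"name": "brownfield-vpc1"},  # assumed profile name
        "children": [
            # Relation to a read-only access policy, per the description.
            {"cloudRsCtxToAccessPolicy": {"attributes": {
                "tDn": "uni/tn-infra/accesspolicy-read-only"}}},  # assumed DN
            # CIDRs posted here must match the CIDRs present in the VPC,
            # or cloudCtxOper/cloudCidrOper move to a Failed state.
            {"cloudCidr": {"attributes": {"addr": "10.10.0.0/16"}}},
            # cloudBrownfield/cloudIDMapping holds the existing VPC ID.
            {"cloudBrownfield": {"attributes": {}, "children": [
                {"cloudIDMapping": {"attributes": {
                    "cloudProviderId": "vpc-0123456789abcdef0"}}},  # assumed attribute name
            ]}},
        ],
    }
}

resp = session.post(f"{CAPIC}/api/mo/uni/tn-tenant1.json", json=payload, verify=False)
resp.raise_for_status()
```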
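For the region limit issue above, the following hub network sketch illustrates the four-region constraint. The cloudtemplateIntNetwork class, the cloudRegionName child, and the distinguished name come from the error message quoted in the description; the attribute names and the auth cookie are assumptions.

```python
# Illustrative sketch only: class names and the DN come from the error
# message; attribute names and the auth cookie are assumptions.
import requests

CAPIC = "https://cloud-apic.example.com"  # hypothetical controller address
REGIONS = ["us-east-1", "us-west-2", "eu-west-1", "ap-south-1"]  # at most 4

payload = {
    "cloudtemplateIntNetwork": {
        "attributes": {"name": "default"},
        "children": [
            {"cloudRegionName": {"attributes": {"name": region}}}
            for region in REGIONS
        ],
    }
}

# Posting a fifth region would be rejected with
# "Invalid Configuration CT_INTNETWORK_REGION_MAXIMUM".
resp = requests.post(
    f"{CAPIC}/api/mo/uni/tn-infra/infranetwork-default.json",
    json=payload,
    cookies={"APIC-cookie": "<token>"},  # assumes a prior aaaLogin
    verify=False,  # lab sketch; verify certificates in real use
)
resp.raise_for_status()
```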
Resolved Issues
Click the bug ID to access the Bug Search tool and see additional information about the bug. The "Fixed In" column of the table specifies whether the bug was resolved in the base release or a patch release.
Bug ID | Description | Fixed in
 | In the "Cloud Resources" section of the GUI, the names displayed in the "Name" column are not the same as the names of the resources on the cloud; the Cloud APIC object names are shown instead. | 25.1(1e)
 | The route between the internal and external VRF is not programmed on the CSR. It is expected to be configured on the CSR as part of the leakTo subnet configuration. | 25.1(1e)
 | In the TGW L3Out configuration, modifying the IPsec tunnel pre-shared key is not supported. The UI does not allow modification of the IPsec tunnel pre-shared key, as this field is grayed out. The API accepts the modification, but it is not updated on AWS. | 25.1(1e)
 | Sometimes the Cloud APIC CSR/CCR "Instance Status" under "Cloud Resources > Instances" is shown as "Not-applicable". | 25.1(1e)
 | On Azure, it takes a long time for Cisco Catalyst 8000Vs to become ready when deployed in a new region. As a result, the inter-site tunnels take time to come up, delaying the traffic. | 25.1(1e)
 | When a contract's resource name does not match the regex '(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)', a fault is raised with the following message, as received from the specific cloud: "Must be a match of regex '(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)', invalid". The details of the exact mismatch are missing from the fault (see the name-check sketch following this table). | 25.1(1e)
 | Cloning of route-table entries to a Cloud APIC route table does not port route-table associations with other entities, such as managed prefix lists in AWS, SecurityGroups in Azure, and other such entities. This issue appears when the user imports a VPC/VNet into Cloud APIC in Routing Only mode or Routing & Security mode and clones the route table(s) into an already-imported VPC/VNet. | 25.1(1e)
 | One or more Cisco Catalyst 8000Vs might not get deployed on a loaded system (a system with a scale configuration). | 25.1(1e)
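For the naming fault above, the following name-check sketch pre-checks resource names against the regex quoted in the fault message before configuration is pushed. The regex string is copied verbatim from the fault; the helper itself is illustrative only and not part of any Cisco tooling.

```python
# Pre-check contract resource names against the cloud's naming regex.
# The regex is copied from the fault message above.
import re

CLOUD_NAME_RE = re.compile(r"(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)")

def is_valid_cloud_name(name: str) -> bool:
    """Return True if the whole name matches the cloud's resource-name regex."""
    return CLOUD_NAME_RE.fullmatch(name) is not None

assert is_valid_cloud_name("web-to-db")      # lowercase letters and hyphens pass
assert not is_valid_cloud_name("Web_To_DB")  # uppercase and underscores are rejected
assert not is_valid_cloud_name("-web")       # must start with a lowercase letter
```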
Known Issues
Click the bug ID to access the Bug Search tool and see additional information about the bug. The "Exists In" column of the table specifies the 25.1(1) releases in which the bug exists. A bug might also exist in releases other than the 25.1(1) releases.
Bug ID | Description | Exists in
 | The statistics page for Azure might have missing data in the GUI when the Azure cloud reports data incorrectly. | 25.1(1e) and later
 | If the Cloud APIC infra CIDR collides with the reserved CIDR 172.17.0.0/16, connectivity from the Cloud APIC VM to the Cat8kv VMs might fail. If the connectivity fails, the configuration push to the Cat8kv fails and the Cat8kv remains unreachable from the Cloud APIC. A fault is raised in the Cloud APIC. Currently, 172.17.0.0/16 is reserved by the Cloud APIC and cannot be used as the infra CIDR; it is used by the Docker network running on the Cloud APIC (see the CIDR-check sketch following this table). | 25.0(3k) and later
 | When a cloudExtEpg matches on a 0/0 network and has a bidirectional contract with two cloud EPGs, such as cloudEpg1 and cloudEpg2, this can result in inadvertent communication between endpoints in cloudEpg1 and cloudEpg2 without a contract between the two EPGs themselves. | 25.0(1c) and later
 | Logs are lost upon stopping the Cloud APIC instance. | 25.0(1c) and later
 | There is traffic loss after a Cloud APIC upgrade. Traffic eventually converges, but this can take a few minutes. | 25.0(1c) and later
 | Creating VPN connections fails with the "invalidCidr" error in AWS or the "More than one connection having the same BGP setting is not allowed" error in Azure. | 25.0(1c) and later
 | When a fault is raised in the Cloud APIC, the fault message is truncated and does not include the entire cloud message description. | 25.0(1c) and later
 | REST API access to the Cloud APIC becomes delayed after deleting a tenant with scaled EPGs and endpoints. The client needs to retry after receiving the error. | 25.0(1c) and later
 | The Ctx Oper managed object is not deleted after the attachment is deleted. | 25.0(1c) and later
 | Traffic gets dropped after downgrading to the 5.0(1) release. The Cloud Services Router has incompatible configurations due to an issue with reading configurations using SSH. | 25.0(1c) and later
 | On the Dashboard, fewer VNet peerings are shown than expected. | 25.0(1c) and later
 | When an invalid Cloud Services Router license token is configured after initially configuring a valid token, the Cloud Services Router fails the license registration and keeps using the old valid token. This failure can only be found in the CSR event log. | 25.0(1c) and later
 | Redirection and UDR do not take effect when traffic coming through an express route and destined to a service endpoint is redirected to a native load balancer or firewall. | 25.0(1c) and later
 | Inter-site VXLAN traffic drops for a given VRF table when it is deleted and re-added. Packet capture on the CSR shows punts with code 10, cause "Incomplete adjacency", and drops with code 94, cause "Ipv4NoAdj". | 25.0(1c) and later
 | There is complete traffic loss for 180 seconds. | 25.0(1c) and later
 | Inter-region traffic is black-holed after the delete trigger for contracts/filters. The TGW entry pointing to the remote-region TGW is missing for the destination routes. On further debugging, it was found that after the delete trigger, as part of the re-add flow, a describe call sent to AWS returned the state of this entry as "active", because of which a new create request is not sent. | 25.0(1c) and later
 | The infra VPC subnet route table entry for the 0.0.0.0/0 route, with the TGW attachment as the next hop, is left as a stale entry upon being undeployed. There is no functional impact. Upon redeployment, this entry is updated with the correct TGW attachment ID as the next hop. | 25.0(1c) and later
 | SSH to a virtual machine's public IP address fails, despite the NSG allowing the traffic inbound. SSH to the private IP address of the virtual machine from within the VNet works. | 25.0(1c) and later
 | After upgrading the Cloud APIC, the Cloud Services Routers are upgraded in two batches. The even set of CSRs is triggered for upgrade first; only after their upgrade is complete and all of the even CSRs are datapathReady is the odd set of CSRs triggered for upgrade. If even one of the even CSR upgrades fails and that CSR does not become datapathReady, the odd set of CSRs is not triggered for upgrade. This behavior is followed to avoid traffic loss. | 25.0(1c) and later
 | When the Cloud APIC is restarted, the VPN connections from a tenant's VNets get deleted and re-created, one by one. This can be seen in the Azure activity logs. It should not impact traffic, because not all connections are deleted at the same time. | 25.0(1c) and later
 | When downgrading from the 5.2(1) release to the 5.0(2) release, traffic loss is expected until all of the CSRs are downgraded back to the 17.1 release. The traffic loss occurs because, while the CSRs are being downgraded to the 17.1 release, the CSR NIC1s are in the backendPools and traffic from the spokes is still forwarded to the native load balancer. The traffic gets black-holed until the CSRs are fully programmed with all of the configurations in the 17.1 release. | 25.0(1c) and later
 | Upon downgrading the Cloud APIC, VPN connections between the Cloud APIC and the cloud (AWS/Azure VPN gateway) are deleted and re-created, causing traffic loss. The duration of the traffic loss depends on how quickly the VPN connections are deleted and re-created in AWS, due to AWS throttling. | 25.0(1c) and later
 | A user who is assigned a large number of security domains may not be able to create other Cisco ACI policies. | 25.0(1c) and later
 | A user who is assigned a large number of security domains may not be able to create other Cisco ACI policies. | 25.0(1c) and later
 | When TGW Connect is disabled, traffic loss is observed for about 8 minutes. | 25.0(1c) and later
 | When a delete followed by an add operation on a tenant or other resources (such as VPCs and contracts) is done within a short span of time, the resource deployment in Google Cloud may get out of sync with the configuration on Cisco Cloud APIC. The likelihood of this happening is directly proportional to the scale of the configuration and how quickly the operations are done. Resources may be either not created or not deleted on Google Cloud to match the user configuration on Cisco Cloud APIC. | 25.0(1c) and later
 | Downgrading Cisco Cloud APIC from release 5.2(1) to 5.1(2) may cause the CSRs to not be downgraded. The CSR release for 5.2(1) is 17.3.2, and the CSR release for 5.1(2) is 17.3.1. After the Cisco Cloud APIC downgrade, the CSR version should be downgraded to 17.3.1, but this does not happen due to this bug. | 25.0(1c) and later
 | Loss of traffic between a cloud and a Cisco ACI on-premises deployment. | 25.0(1c) and later
 | After upgrading on AWS, the infra VPC peering does not get deleted. | 25.0(1c) and later
 | There is traffic loss after downgrading from 5.2(1) to 5.1(2). | 25.0(1c) and later
 | There is a loss of SSH connectivity to the Cisco Cloud APIC across reboots. After a few minutes, the connection comes back and users are able to SSH in to the Cisco Cloud APIC again. | 25.0(1c) and later
 | There is an increase in the connector's memory utilization. All of the CSR workflows might rerun even after the setup is in a steady state. | 25.0(1c) and later
 | After upgrading the Cisco Cloud APIC, on the TGW route tables, the default route (0.0.0.0/0) does not point to the infra VPC attachment or is missing. In this case, traffic intended to be forwarded to the CSR is dropped or forwarded to an invalid next hop (see the default-route check following this table). | 25.0(1c) and later
 | There is inter-site traffic loss when TGW Connect is enabled. | 25.0(1c) and later
 | Cloud inter-site traffic is dropped because the CSR in the cloud site does not advertise the EVPN routes. | 25.0(1c) and later
 | Routes for subnets that are not yet configured in Google Cloud may become visible on an external device. When you configure routes to be advertised to an external device but do not actually configure the cloud subnets for which you intend to advertise the routes, those routes are still advertised, and the remote router may see them. The traffic gets dropped because the subnets are not actually configured. | 25.0(1c) and later
 | The cloud VRF egress route table is missing the route for 0.0.0.0/0 via the Internet Gateway (IGW), which leads to issues with SSH for VMs in the cloud VRF. | 25.0(1c) and later
 | An upgrade to or downgrade from the Cloud APIC 5.2(1g) release to any release while using "Ignore Compatibility Check: no" will fail. The following fault is raised: "The upgrade has an upgrade status of Failed Due to Incompatible Desired Version." | 25.0(1c) and later
 | When a delete followed by an add operation on a tenant or other resources (such as VPCs and contracts) is done within a short span of time, the resource deployment in Google Cloud may get out of sync with the configuration on Cisco Cloud APIC. The likelihood of this happening is directly proportional to the scale of the configuration and how quickly the operations are done. Resources may be either not created or not deleted on Google Cloud to match the user configuration on Cisco Cloud APIC. | 25.0(1c) and later
Compatibility Information
This section lists the compatibility information for the Cisco Cloud Network Controller software. In addition to the information in this section, see the appropriate Cisco Application Policy Infrastructure Controller Release Notes and Cisco Nexus Dashboard Orchestrator Release Notes for compatibility information for those products.
· Cloud Network Controller release 25.1(1e) is compatible with Cisco Nexus Dashboard Orchestrator, release 4.0(3).
· Cloud Network Controller supports the following AWS regions:
o Asia Pacific (Hong Kong)
o Asia Pacific (Mumbai)
o Asia Pacific (Osaka-Local)
o Asia Pacific (Seoul)
o Asia Pacific (Singapore)
o Asia Pacific (Sydney)
o Asia Pacific (Tokyo)
o AWS GovCloud (US-Gov-West)
o Canada (Central)
o EU (Frankfurt)
o EU (Ireland)
o EU (London)
o EU (Milan)
o EU (Stockholm)
o South America (São Paulo)
o US East (N. Virginia)
o US East (Ohio)
o US West (N. California)
o US West (Oregon)
· Cloud Network Controller supports the following Azure regions:
o Australiacentral
o Australiacentral2
o Australiaeast
o Australiasoutheast
o Brazilsouth
o Canadacentral
o Canadaeast
o Centralindia
o Centralus
o Eastasia
o Eastus
o Eastus2
o Francecentral
o Germanywestcentral
o Japaneast
o Japanwest
o Koreacentral
o Koreasouth
o Northcentralus
o Northeurope
o Norwayeast
o Southafricanorth
o Southcentralus
o Southeastasia
o Southindia
o Switzerlandnorth
o Uaenorth
o Uksouth
o Ukwest
o Westcentralus
o Westeurope
o Westindia
o Westus
o Westus2
· Cloud Network Controller supports the following Azure Government cloud regions:
o US DoD Central
o US DoD East
o US Gov Arizona
o US Gov Texas
o US Gov Virginia
Note: The US Gov Iowa region is not supported in Cisco Cloud Network Controller because Azure has deprecated support for this region.
· Cloud Network Controller supports all Google Cloud regions.
Related Content
See the Cisco Cloud Network Controller page for the documentation.
See the Cisco Application Policy Infrastructure Controller (APIC) page for the verified scalability, Cisco Application Policy Infrastructure Controller (APIC), and Cisco Nexus Dashboard Orchestrator (NDO) documentation.
The documentation includes installation, upgrade, configuration, programming, and troubleshooting guides, technical references, release notes, and knowledge base (KB) articles, as well as other documentation. KB articles provide information about a specific use case or a specific topic.
By using the "Choose a topic" and "Choose a document type" fields of the APIC documentation website, you can narrow down the displayed documentation list to make it easier to find the desired document.
Documentation Feedback
To provide technical feedback on this document, or to report an error or omission, send your comments to apic-docfeedback@cisco.com. We appreciate your feedback.
Legal Information
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: http://www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
© 2021-2022 Cisco Systems, Inc. All rights reserved.