This document describes the features, issues, and deployment guidelines for Cisco Application Centric Infrastructure (ACI) Multi-Site Orchestrator software.
Cisco ACI Multi-Site is an architecture that allows you to interconnect separate Cisco APIC cluster domains (fabrics), each representing a different region. This helps ensure multitenant Layer 2 and Layer 3 network connectivity across sites and extends the policy domain end-to-end across the entire system.
Cisco ACI Multi-Site Orchestrator is the intersite policy manager. It provides single-pane management that enables you to monitor the health of all the interconnected sites. It also allows you to centrally define the intersite policies that can then be pushed to the different Cisco APIC fabrics, which in turn deploy them on the physical switches that make up those fabrics. This provides a high degree of control over when and where to deploy those policies.
For more information, see the “Related Content” section of this document.
Note: The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product.
Date |
Description |
May 12, 2022 |
Release 3.1(1m) became available. Additional open issue CSCvw26629 in earlier 3.1(1) releases, which is resolved in 3.1(1m). |
February 24, 2022 |
Updated the “Changes in Behavior” section with the full list of supported fabric releases. |
November 22, 2021 |
Release 3.1(1l) became available. Additional open issues CSCvz84543, CSCvy97158, CSCvy01754 in earlier 3.1(1) releases, which are resolved in 3.1(1l). |
October 26, 2021 |
Corrected the “Fixed in” field for CSCvt23491 from 3.1(1g) to 3.1(1h) to properly reflect the release in which the enhancement is implemented. |
October 21, 2021 |
Additional issue CSCvt23491, which is open in 3.1(1g) and resolved in 3.1(1h). |
August 16, 2021 |
Additional open issues, CSCvy94170, CSCvy99012, CSCvy61486. |
August 9, 2021 |
Additional open issues CSCvy63967, CSCvy95575. |
April 28, 2021 |
Additional open issue CSCvy16381 in Release 3.1(1g), which is resolved in 3.1(1i). |
March 24, 2021 |
Release 3.1(1i) became available. Additional open issues in Release 3.1(1g), which are resolved in 3.1(1i). |
February 26, 2021 |
Release 3.1(1h) became available. Additional open issues in Release 3.1(1g), which are resolved in 3.1(1h). |
December 2, 2020 |
Additional resolved issue CSCvw61549. |
November 28, 2020 |
Release 3.1(1g) became available. |
This release adds the following new features:
Feature |
Description |
Native support for additional VRF, BD, and Contract properties |
Additional object properties, which were previously only configurable directly in the sites’ APIC, can now be configured and managed directly from your Multi-Site Orchestrator. For additional information, see Cisco ACI Multi-Site Configuration Guide. |
Redeployment of a full template |
You can now redeploy the entire template without first having to undeploy it. For additional information, see Cisco ACI Multi-Site Configuration Guide. |
Schema visualizer |
The schema overview page now provides a summary of the deployment and a visual representation of the object relationships for the objects defined and deployed to one or more ACI fabrics. For additional information, see Cisco ACI Multi-Site Configuration Guide. |
Site software upgrades for MSO-managed fabrics |
You can manage the ACI fabrics’ firmware and trigger site upgrades (or downgrades) directly from the Multi-Site Orchestrator. For additional information, see Cisco ACI Multi-Site Configuration Guide. |
Enhancements for object import from managed sites |
The following additional object import enhancements are now available:
● “Select All” option for easy import of entire configuration.
● Object references are resolved automatically after the objects are imported.
● External EPGs with the same name can now be imported into different templates.
For additional information, see Cisco ACI Multi-Site Configuration Guide. |
Enhancements for Cloud Sites |
The following additional Cloud ACI enhancements are now available:
● Support for private IP for Cloud APIC and CSR access.
● Support for subnet-based Network Security Groups.
● Contract optimization for Azure Network and Application Security Groups.
● Service chaining and redirect for cloud traffic to load balancer.
● Express route UDR for traffic redirect to firewall or load balancer.
● Support for inter-tenant contracts.
For additional information, see Cisco Cloud APIC documentation. |
Support for multiple DHCP policies on a bridge domain |
Release 3.1(1h) supports assigning multiple DHCP policies to a Bridge Domain (BD). Release 3.1(1g) and earlier support only a single DHCP policy per BD. For additional information, see Cisco ACI Multi-Site Configuration Guide. |
There is no new hardware supported in this release.
The complete list of supported hardware is available in the Cisco ACI Multi-Site Hardware Requirements Guide.
If you are upgrading to this release, you will see the following changes in behavior:
● Release 3.1(1) is the final Multi-Site Orchestrator release using the docker platform and running directly in VMware ESX. All future releases of this software beginning with Release 3.2(1) must be installed in Cisco Nexus Dashboard.
For backward compatibility purposes, the following Cisco APIC and Cisco Cloud APIC releases are supported by this release when using the docker platform:
◦ APIC releases: 5.2(x), 5.1(x), 5.0(x), 4.2(x), 4.1(x), 4.0(x), 3.2(x)
◦ Cloud APIC releases: 5.1(x), 5.0(x), 4.2(x), 4.1(x), 4.0(x), 3.2(x)
● Starting with Release 3.1(1h), you can assign multiple DHCP policies to a Bridge Domain (BD).
Release 3.1(1g) and earlier support only a single DHCP policy per BD.
If you assign multiple DHCP policies in Release 3.1(1h) and then downgrade to a release that supports only a single policy, the first policy in the list will be used and any additional policies will be removed from the BD.
● Starting with Release 3.1(1h), configuration of EPG subnet "querier" flag is no longer supported from MSO UI and API.
After upgrading to Release 3.1(1h), if you enable the "querier" flag on an EPG subnet in APIC, MSO will display configuration drift. In addition, if you deploy the EPG from MSO, the flag will be disabled on the APIC.
If you want to enable the "querier" flag on a subnet, you can do so at the bridge domain level instead of the EPG.
● For all new deployments, we recommend installing Multi-Site Orchestrator in Application Services Engine.
● If your Multi-Site Orchestrator is deployed in Application Services Engine and you are upgrading from Release 3.0(1) or earlier, you must deploy a new Application Services Engine, Release 1.1.3d cluster and migrate your existing configuration.
The procedure is described in detail in Cisco ACI Multi-Site Orchestrator Installation and Upgrade Guide.
● Any shadow objects created by the Multi-Site Orchestrator will now be automatically hidden in each site’s APIC GUI if that site is running Cisco APIC, Release 5.0(2) or later. You can choose to disable this feature in the APIC GUI if you want the objects to be visible.
● Simultaneous updates to Sites, Tenants, and Schemas from multiple Multi-Site Orchestrator GUI sessions will no longer cause some changes to be overwritten or lost. However, the default REST API functionality was left unchanged in order to preserve backward compatibility with existing applications. In other words, while the UI is always enabled for this protection, you must explicitly enable it for your UPDATE and DELETE API calls for MSO to keep track of configuration changes, as described in the Cisco ACI Multi-Site REST API Configuration Guide (see the API sketch after this list).
● Starting with Release 2.2(3), additional External EPG subnet flags have been exposed through the Multi-Site Orchestrator GUI.
Prior to Release 2.2(3), only the following subset of external EPG subnet flags available on each site’s APIC was managed by the Multi-Site Orchestrator:
◦ Shared Route Control—configurable in the Orchestrator GUI
◦ Shared Security Import—configurable in the Orchestrator GUI
◦ Aggregate Shared Routes—configurable in the Orchestrator GUI
◦ External Subnets for External EPG—not configurable in the GUI, but always implicitly enabled
Starting with Release 2.2(3), all subnet flags available from the APIC can be configured and managed from the Orchestrator:
◦ Export Route Control
◦ Import Route Control
◦ Shared Route Control
◦ Aggregate Shared Routes
◦ Aggregate Export (enabled for 0.0.0.0 subnet only)
◦ Aggregate Import (enabled for 0.0.0.0 subnet only)
◦ External Subnets for External EPG
◦ Shared Security Import
When upgrading to this release from Release 2.2(2) or earlier, any subnet flags previously unavailable in the Orchestrator GUI will be imported from the APIC and added to the Orchestrator configuration. All imported flags will retain their state (enabled or disabled) with the exception of External Subnets for External EPG, which will remain enabled post-upgrade. If you had previously explicitly disabled the External Subnets for External EPG flag directly in the APIC (for example, in a Cloud APIC use case), you will need to disable it again through the Orchestrator GUI.
When downgrading from this release to Release 2.2(2) or earlier, the subnet flags not available in those releases will be cleared and set to disabled in the sites’ APICs. You can then manually enable them directly in each site’s APIC if necessary.
For additional information on these flags, see the Cisco ACI Multi-Site Configuration Guide.
● When upgrading from a release prior to Release 2.2(1), a GUI lockout timer for repeated failed login attempts is automatically enabled by default. It is set to 5 failed login attempts before a lockout, with the lockout duration increasing exponentially with each additional failed attempt.
● If you configure read-only user roles in Release 2.1(2) or later and then choose to downgrade your Multi-Site Orchestrator to an earlier version where the read-only roles are not supported:
◦ You will need to reconfigure your external authentication servers to the old attribute-value (AV) pair string format. For details, see the "Administrative Operations" chapter in the Cisco ACI Multi-Site Configuration Guide.
◦ The read-only roles will be removed from all users. This also means that any user that has only the read-only roles will have no roles assigned to them and a Power User or User Manager will need to re-assign them new read-write roles.
● Starting with Release 2.1(2), the 'phone number' field is no longer mandatory when creating a new Multi-Site Orchestrator user. However, because the field was required in prior releases, any user created in Release 2.1(2) or later without a phone number provided will be unable to log into the GUI if the Orchestrator is downgraded to Release 2.1(1) or earlier. In this case, a Power User or User Manager will need to provide a phone number for the user.
● If you are upgrading from any release prior to Release 2.1(1), the default password and the minimum password requirements for the Multi-Site Orchestrator GUI have been updated. The default password has been changed from “We1come!” to “We1come2msc!” and the new password requirements are:
◦ At least 12 characters
◦ At least 1 letter
◦ At least 1 number
◦ At least 1 special character apart from * and space
You will be prompted to reset your passwords when you:
◦ First install Release 2.1(1) or later
◦ Upgrade to Release 2.1(1) or later from a release prior to Release 2.1(1)
◦ Restore the Multi-Site Orchestrator configuration from a backup
● Starting with Release 2.1(1), Multi-Site Orchestrator encrypts all stored passwords, such as each site’s APIC passwords and the external authentication provider passwords. As a result, if you downgrade to any release prior to Release 2.1(1), you will need to re-enter all the passwords after the Orchestrator downgrade is completed.
To update APIC passwords:
◦ Log in to the Orchestrator after the downgrade.
◦ From the main navigation menu, select Sites.
◦ For each site, edit its properties and re-enter its APIC password.
To update external authentication passwords:
◦ Log in to the Orchestrator after the downgrade.
◦ From the main navigation menu, select Admin > Providers.
◦ For each authentication provider, edit its properties and re-enter its password.
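For illustration of the API-side protection described in the Simultaneous updates bullet above, the following minimal Python sketch (using the requests library) shows an UPDATE call that opts in to configuration-change tracking. The endpoint paths, the token-based login, and the enableVersionCheck query parameter are assumptions drawn from the pattern documented in the Cisco ACI Multi-Site REST API Configuration Guide; verify the exact names for your release before relying on it.

import requests

MSO = "https://mso.example.com"  # hypothetical Orchestrator address

# Log in and obtain a bearer token (assumed /api/v1/auth/login endpoint).
token = requests.post(
    f"{MSO}/api/v1/auth/login",
    json={"username": "admin", "password": "<password>"},
    verify=False,  # lab-only; use proper certificates in production
).json()["token"]
headers = {"Authorization": f"Bearer {token}"}

# Read, modify, and write back a schema. Passing enableVersionCheck=true
# (assumed opt-in parameter) asks MSO to reject the PUT if another session
# modified the schema in the meantime, mirroring the always-on GUI behavior.
schema_id = "5e43523f1100007b012b0fcd"  # example schema ID reused from this document
schema = requests.get(
    f"{MSO}/api/v1/schemas/{schema_id}", headers=headers, verify=False
).json()
schema["displayName"] = "updated-schema-name"

resp = requests.put(
    f"{MSO}/api/v1/schemas/{schema_id}",
    params={"enableVersionCheck": "true"},  # assumed opt-in parameter
    json=schema,
    headers=headers,
    verify=False,
)
print(resp.status_code)  # an error here would indicate a concurrent change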
This section lists the open issues. Click the bug ID to access the Bug Search Tool and see additional information about the bug. The "Exists In" column of the table specifies the 3.1(1) releases in which the bug exists. A bug might also exist in releases other than the 3.1(1) releases.
Bug ID |
Description |
Exists in |
If a route-target profile is configured under a VRF (VRF 1), making any change to the template and deploying it (for example, adding or deleting another VRF (VRF 2)) triggers deletion of the route-target profile that exists on VRF 1. Tested in the lab: on Version 3.0(3i) the deletion is triggered; on Version 2.2(4e) it is not. |
3.1(1g) |
|
After changing the NTP server via svm-msc-tz-ntp, the settings do not take effect until NTP is manually disabled and re-enabled. For example, the NTP server remains 10.66.88.1 even though the configured NTP server is 10.66.68.1: [root@node1 scripts]# ./svm-msc-tz-ntp -tz Australia/Sydney -ne -ns 10.66.68.1. This does not work as described in the CCO documentation. |
3.1(1g) |
|
After moving an EPG/BD to a new VRF, unexpected route leaking configuration is pushed by MSO and EPGs in the old VRF and new VRF are updated to global pcTags. Additionally, the command "show dcimgr repo sclass-maps" on the spine switch shows the EPG belonging to two VRFs instead of one, which may cause packets to be dropped during translation. |
3.1(1g) |
|
Template deployment fails consistently when a tenant configuration is deployed using multiple templates in the same schema with a vzAny contract: Bad Request: Error from APIC: https://IP_ADDRESS, error: child (Rn) of class fvTenant is already attached. dn[(Dn0)] Dn0=, Rn=tn-common. |
3.1(1g) |
|
When installing dependencies with "python -m pip install -r requirements.txt", the following error is displayed: ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts. We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default. vapi-runtime 2.5.0 requires pyOpenSSL==0.15.1, but you'll have pyopenssl 20.0.1 which is incompatible. |
3.1(1g) |
|
fvSubnet MO scope attribute is modified after a Template without BD definitions gets deleted. |
3.1(1g) |
|
The user sees the error message "Bad Request: Duplicate name for different objects in different templates is not allowed" when trying to deploy an external EPG whose L3Out is in a different template. |
3.1(1g) |
|
Stale VRF (not belonging to this site) gets pushed to the site when deploying other templates. |
3.1(1g) |
|
Querier cannot be enabled when L2 stretch for BD is disabled |
3.1(1g) |
|
After upgrading to 3.1(1g), when a user tries to import configuration from local sites, the following error is shown in the MSO UI: Not Found: APIC https://x.x.x.x not reachable |
3.1(1g) |
|
msc_push_schemas.py not working as expected |
3.1(1g) |
|
Template deployment fails with exception message: |
3.1(1g) and 3.1(1h) |
|
Config drift is shown for EPGs after upgrade or backup restore. |
3.1(1g) and 3.1(1h) |
|
Template deployment fails with exception message: |
3.1(1g) and 3.1(1h) |
|
The following error may be displayed: This error is misleading, as the L3Out is already deployed by a different template and the current template doesn't contain it; the Execution Engine service attempts to create it, causing the validation code to throw an error. |
3.1(1g) and 3.1(1h) |
|
With MSO Release 3.1(1g) and cAPIC Release 5.1(2g), a VRF region cannot be associated with a VGW. |
3.1(1g) and 3.1(1h) |
|
Importing from a managed site fails when the site is reachable through an HTTP proxy. |
3.1(1g), 3.1(1h), and 3.1(1i) |
|
Shadow EPG/BDs are not removed when the contract is removed. |
3.1(1g), 3.1(1h), and 3.1(1i) |
|
After performing an upgrade to 3.1(1i), changes to templates are no longer allowed and the following error is raised when trying to deploy changes to sites: "Background Sync is in progress. Please try after a few minutes" |
3.1(1g), 3.1(1h), and 3.1(1i) |
|
Hovering over an object in the Deployment window shows incorrect subnets in its list of modifications. |
3.1(1g), 3.1(1h), 3.1(1i), and 3.1(1l) |
|
When service graphs or devices are created on Cloud APIC by using the API and custom names are specified for AbsTermNodeProv and AbsTermNodeCons, a brownfield import to the Multi-Site Orchestrator will fail. |
3.1(1g) and later |
|
Contract is not created between shadow EPG and on-premises EPG when shared service is configured between Tenants. |
3.1(1g) and later |
|
Inter-site shared service between VRF instances across different tenants will not work, unless the tenant is stretched explicitly to the cloud site with the correct provider credentials. That is, there will be no implicit tenant stretch by Multi-Site Orchestrator. |
3.1(1g) and later |
|
The Deployment window may show more policies as modified than the actual configuration changes made by the user in the schema. |
3.1(1g) and later |
|
Deployment window may not show all the service graph related config values that have been modified. |
3.1(1g) and later |
|
Deployment window may not show all the cloud related config values that have been modified. |
3.1(1g) and later |
|
After brownfield import, the BD subnets are present in the site-local configuration and not in the common template configuration. |
3.1(1g) and later |
|
In shared services use case, if one VRF has preferred group enabled EPGs and another VRF has vzAny contracts, traffic drop is seen. |
3.1(1g) and later |
|
If an EPG and its contract relationships are imported from APIC into MSO and a relationship is then removed in MSO and deployed, MSO does not delete the contract relationship on the APIC. |
3.1(1g) and later |
|
The fvImportExtRoutes flag is created for the VRF even though the site1 and site3 external EPGs have a provider contract. |
3.1(1g) and later |
|
The REST API call "/api/v1/execute/schema/5e43523f1100007b012b0fcd/template/Template_11?undeploy=all" can fail if the template being deployed has a large object count |
3.1(1g) and later |
|
Shared service traffic drops from external EPG to EPG in case of EPG provider and L3Out vzAny consumer |
3.1(1g) and later |
|
Intersite L3Out traffic is impacted because of missing import RT for VPN routes |
3.1(1g) and later |
|
Site deletion throws error: |
3.1(1g) and later |
|
Routes are not programmed on CSR and the contract config is not pushed to the Cloud site. |
3.1(1g) and later |
|
Unable to add an APIC site with a different site ID if it was previously removed from MSO. |
3.1(1g) and later |
|
MSO will not update or delete VRF vzAny configuration which was directly created on APIC even though the VRF is managed by MSO. |
3.1(1g) and later |
|
When deploying fabric connectivity between on-premises and cloud sites, you may get a validation error stating that l3extSubnet/cloudTemplateBgpEvpn is already attached. |
3.1(1g) and later |
|
If you are logged in to the Application Services Engine 1.1.3d UI and the MSO UI in different browser tabs, the backup import functionality does not work. This is due to the different authorization cookies used by the SE and MSO APIs. |
3.1(1g) and later |
|
In a shared services scenario, stale shadow BD/EPG entries are not cleared on the APIC when the Preferred Group and regular contract are removed. |
3.1(1g) and later |
|
Shadow of cloud VRF may be unexpectedly created or deleted on the on-premises site. |
3.1(1g) and later |
|
Two cloud sites (with private IPs for CSRs) that use the same InfraVNETPool can be added to MSO without any InfraVNETPool validation. |
3.1(1g) and later |
|
Config drift for BD or VRF after backup restore or MSO upgrade. |
3.1(1g) and later |
|
On backup restore or upgrade, Service Graphs in templates may show a false config drift. |
3.1(1g) and later |
|
Config drift shown for EPGs after upgrade or backup restore |
3.1(1g) and later |
|
Physical domain mappings were unexpectedly removed from multiple EPGs. |
3.1(1g) and later |
|
MSO removes the L3Out-BD association from sites after even an unrelated L3Out in another template is deleted. |
3.1(1g) and later |
|
When creating a new EPG in a large schema (for example ~400 EPGs and ~400 BDs), there is a delay before the EPG name is displayed in the text box. |
3.1(1g) and later |
|
Random MSO APIs return 500 errors for about 20 minutes while the system detects the node outage and relocates services. |
3.1(1g) and later |
|
Some EPGs are not shown in the Provider list in the DHCP Relay Policy creation UI. |
3.1(1g) and later |
|
MSO allows LDAP users to log in with a username only, without any password. |
3.1(1g) and later |
|
After migration, deploying a template led to deletion of static ports. |
3.1(1g) and later |
|
Removing EPG objects created from MSO for one site can unexpectedly remove the application profile on the remote site. |
3.1(1g) and later |
This section lists the resolved issues. Click the bug ID to access the Bug Search Tool and see additional information about the issue. The "Fixed In" column of the table specifies whether the bug was resolved in the base release or a patch release.
Bug ID |
Description |
Fixed in |
When a user has logged in to Application Services Engine to perform certain operations and then accesses the MSO application running on the same SE, the MSO app API/UI returns 401/403 errors for some time. |
3.1(1g) |
|
An admin can install a new MSO firmware image via the firmware section; images can be added either from a remote web server or from a local file system, for example a laptop. An upload of a local image hangs and does not complete. |
3.1(1g) |
|
Migrating from vzAny to regular contract impacts traffic in APIC. |
3.1(1g) |
|
When using vzAny contracts, shadow external EPGs are created in all APIC sites even if external EPG is stretched. |
3.1(1g) |
|
Associating a contract to tenant EPGs using MSO causes external EPG or L3out to be removed from APIC. |
3.1(1g) |
|
Shadow of external EPG’s VRF not being properly updated. |
3.1(1g) |
|
MSO shows the L3Outs of another tenant when associating it with a BD. |
3.1(1g) |
|
When you try to upgrade MSO from 2.0(x) to 3.0(1), the upgrade script shows the following errors in the logs: ERROR site 5e5eff4b120000892d98c2dd of templateSite (schema: 5e688b0c110000480b02b3f6 template: Template1) not found in schema! ERROR schema not found for schemaId: 5e66c0911200004f2c6e542e However, the upgrade completes correctly. |
3.1(1g) |
|
MSO-owned VRF exists on APIC when the owner Template on MSO is un-deployed. |
3.1(1g) |
|
If a template with an empty AP (a cloudApp without any cloudEpgs) is defined and then undeployed, the cloudApp is deleted. If other templates define the same AP name and have cloudEpgs, the cloudApp deletion also deletes all of the cloudEpgs defined in those templates. |
3.1(1g) |
|
Traffic between on-premises external EPG (aka InstP) and the cloud EPG is disrupted due to |
3.1(1g) |
|
When BD's subnets are changed, the cloud external EPG doesn’t get the updated cloud selector. |
3.1(1g) |
|
Schema/template deployment fails when an already-added cAPIC site (name A, IP address A) is freshly brought up and a different IP address is assigned to that cAPIC site. |
3.1(1g) |
|
Creating a contract between on-premises External EPG and stretched EPG impacts other on-premises L3Out External EPG routes. |
3.1(1g) |
|
When you deploy a multinode service graph from MSO to an AWS Cloud APIC site, the expected service chaining configuration can fail on AWS. In addition, if the AWS Cloud APIC API POST allows third-party firewall creation and you select that FW device for the multinode graph in MSO, the expected service chaining configuration can fail in Cloud APIC. |
3.1(1g) |
|
Traffic may stop for EPGs stretched between on-premises and cloud sites. |
3.1(1g) |
|
Unable to select the site-local L3Out for a newly created BD from MSO. |
3.1(1g) |
|
Enhancement to add the ability in MSO to configure multiple DHCP relay policies for a BD. |
3.1(1h) |
|
If a route-target profile is configured under a VRF (VRF 1), making any change to the template and deploying it (for example, adding or deleting another VRF (VRF 2)) triggers deletion of the route-target profile that exists on VRF 1. Tested in the lab: on Version 3.0(3i) the deletion is triggered; on Version 2.2(4e) it is not. |
3.1(1h) |
|
After changing the NTP server via svm-msc-tz-ntp, the settings do not take effect until NTP is manually disabled and re-enabled. For example, the NTP server remains 10.66.88.1 even though the configured NTP server is 10.66.68.1: [root@node1 scripts]# ./svm-msc-tz-ntp -tz Australia/Sydney -ne -ns 10.66.68.1. This does not work as described in the CCO documentation. |
3.1(1h) |
|
After moving an EPG/BD to a new VRF, unexpected route leaking configuration is pushed by MSO and EPGs in the old VRF and new VRF are updated to global pcTags. Additionally, the command "show dcimgr repo sclass-maps" on the spine switch shows the EPG belonging to two VRFs instead of one, which may cause packets to be dropped during translation. |
3.1(1h) |
|
Template deployment fails consistently when a tenant configuration is deployed using multiple templates in the same schema with a vzAny contract: Bad Request: Error from APIC: https://IP_ADDRESS, error: child (Rn) of class fvTenant is already attached. dn[(Dn0)] Dn0=, Rn=tn-common. |
3.1(1h) |
|
When installing dependencies with "python -m pip install -r requirements.txt", the following error is displayed: ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts. We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default. vapi-runtime 2.5.0 requires pyOpenSSL==0.15.1, but you'll have pyopenssl 20.0.1 which is incompatible. |
3.1(1h) |
|
fvSubnet MO scope attribute is modified after a Template without BD definitions gets deleted. |
3.1(1h) |
|
The user sees the error message "Bad Request: Duplicate name for different objects in different templates is not allowed" when trying to deploy an external EPG whose L3Out is in a different template. |
3.1(1h) |
|
Stale VRF (not belonging to this site) gets pushed to the site when deploying other templates. |
3.1(1h) |
|
Querier cannot be enabled when L2 stretch for BD is disabled |
3.1(1h) |
|
After upgrading to 3.1(1g), when a user tries to import configuration from local sites, the following error is shown in the MSO UI: Not Found: APIC https://x.x.x.x not reachable |
3.1(1h) |
|
msc_push_schemas.py not working as expected |
3.1(1h) |
|
Template deployment fails with exception message: |
3.1(1i) |
|
Config drift is shown for EPGs after upgrade or backup restore. |
3.1(1i) |
|
Template deployment fails with exception message: |
3.1(1i) |
|
The following error may be displayed: This error is misleading, as the L3Out is already deployed by a different template and the current template doesn't contain it; the Execution Engine service attempts to create it, causing the validation code to throw an error. |
3.1(1i) |
|
With MSO Release 3.1(1g) and cAPIC Release 5.1(2g), a VRF region cannot be associated with a VGW. |
3.1(1i) |
|
Importing from a managed site fails when the site is reachable through an HTTP proxy. |
3.1(1l) |
|
Shadow EPG/BDs are not removed when the contract is removed. |
3.1(1l) |
|
After performing an upgrade to 3.1(1i), changes to templates are no longer allowed and the following error is raised when trying to deploy changes to sites: "Background Sync is in progress. Please try after a few minutes" |
3.1(1l) |
|
Hovering over an object in the Deployment window shows incorrect subnets in its list of modifications. |
3.1(1m) |
This section lists known behaviors. Click the Bug ID to access the Bug Search Tool and see additional information about the issue.
Bug ID |
Description |
Unable to download Multi-Site Orchestrator report and debug logs when database and server logs are selected |
|
Unicast traffic flow between a Remote Leaf in Site 1 and a Remote Leaf in Site 2 may be enabled by default. This feature is not officially supported in this release. |
|
After downgrading from 2.1(1), preferred group traffic continues to work. You must disable the preferred group feature before downgrading to an earlier release. |
|
No validation is available for shared services scenarios |
|
The upstream server may time out when enabling audit log streaming |
|
For Cisco ACI Multi-Site, fabric IDs must be the same for all sites, or the querier IP address must be higher on one site. The Cisco APIC fabric querier functions have a distributed architecture in which each leaf switch acts as a querier and packets are flooded; a copy is also replicated to the fabric port. An Access Control List (ACL) is configured on each TOR to drop this query packet coming from the fabric port if the source MAC address is the fabric MAC address, which is unique per fabric. The MAC address is derived from the fabric ID, which is configured by users during the initial bring-up of a pod site. In the Cisco ACI Multi-Site stretched BD with Layer 2 broadcast extension use case, the query packets from each TOR reach the other sites and should be dropped. If the fabric ID is configured differently on the sites, it is not possible to drop them. To avoid this, configure the same fabric ID on each site, or make the querier IP address on one of the sites higher than on the other sites. |
|
STP and "Flood in Encapsulation" Option are not Supported with Cisco ACI Multi-Site. In Cisco ACI Multi-Site topologies, regardless of whether EPGs are stretched between sites or localized, STP packets do not reach remote sites. Similarly, the "Flood in Encapsulation" option is not supported across sites. In both cases, packets are encapsulated using an FD VNID (fab-encap) of the access VLAN on the ingress TOR. It is a known issue that there is no capability to translate these IDs on the remote sites. |
|
If an infra L3Out that is being managed by Cisco ACI Multi-Site is modified locally in a Cisco APIC, Cisco ACI Multi-Site might delete the objects not managed by Cisco ACI Multi-Site in an L3Out. |
|
"Phone Number" field is required in all releases prior to Release 2.2(1). Users with no phone number specified in Release 2.2(1) or later will not be able to log in to the GUI when Orchestrator is downgraded to a an earlier release. |
This section lists usage guidelines for the Cisco ACI Multi-Site software.
● For all new deployments, we recommend installing Multi-Site Orchestrator, Release 3.0(2j) or later in Application Services Engine.
Multi-Site Orchestrator, Release 3.0(2j) requires Application Services Engine, Release 1.1.3d.
● In Cisco ACI Multi-Site topologies, we recommend that first-hop routing protocols such as HSRP and VRRP not be stretched across sites.
● HTTP requests are redirected to HTTPS; HTTP is not supported, either globally or on a per-user basis.
● Up to 12 interconnected sites are supported.
● Proxy ARP glean and unknown unicast flooding are not supported together.
Unknown Unicast Flooding and ARP Glean are not supported together in Cisco ACI Multi-Site across sites.
● Flood in encapsulation is not supported for EPGs and Bridge Domains that are extended across ACI fabrics that are part of the same Multi-Site domain. However, flood in encapsulation is fully supported for EPGs or Bridge Domains that are locally defined in ACI fabrics, even if those fabrics may be configured for Multi-Site.
● The leaf and spine nodes that are part of an ACI fabric do not run Spanning Tree Protocol (STP). STP frames originated from external devices can be forwarded across an ACI fabric (both single Pod and Multi-Pod), but are not forwarded across the inter-site network between sites, even if stretching a BD with BUM traffic enabled.
● GOLF L3Outs for each tenant must be dedicated, not shared.
The inter-site L3Out functionality introduced in MSO Release 2.2(1) does not apply when deploying GOLF L3Outs. This means that for a given VRF you must still deploy at least one GOLF L3Out per site to enable north-south communication. An endpoint connected in a site cannot communicate with resources reachable via a GOLF L3Out connection deployed in a different site.
● While you can create the L3Out objects in the Multi-Site Orchestrator GUI, the physical L3Out configuration (logical nodes, logical interfaces, and so on) must be done directly in each site's APIC.
● VMM and physical domains must be configured in the Cisco APIC GUI at each site; they are then imported and associated within Cisco ACI Multi-Site.
Although domains (VMM and physical) must be configured in Cisco APIC, domain associations can be configured in the Cisco APIC or Cisco ACI Multi-Site.
● Some VMM domain options must be configured in the Cisco APIC GUI.
The following VMM domain options must be configured in the Cisco APIC GUI at the site:
◦ NetFlow/EPG CoS marking in a VMM domain association
◦ Encapsulation mode for an AVS VMM domain
● Some uSeg EPG attribute options must be configured in the Cisco APIC GUI.
The following uSeg EPG attribute options must be configured in the Cisco APIC GUI at the site:
◦ Sub-criteria under uSeg attributes
◦ match-all and match-any criteria under uSeg attributes
● Site IDs must be unique.
In Cisco ACI Multi-Site, site IDs must be unique.
● To change a Cisco APIC fabric ID, you must erase and reconfigure the fabric.
Cisco APIC fabric IDs cannot be changed. To change a Cisco APIC fabric ID, you must erase the fabric configuration and reconfigure it.
However, Cisco ACI Multi-Site supports connecting multiple fabrics with the same fabric ID.
● Caution: When removing a spine switch port from the Cisco ACI Multi-Site infrastructure, perform the following steps:
a. Click Sites.
b. Click Configure Infra.
c. Click the site where the spine switch is located.
d. Click the spine switch.
e. Click the x on the port details.
f. Click Apply.
● Shared services use case: order of importing tenant policies
When deploying a provider site group and a consumer site group for shared services by importing tenant policies, deploy the provider tenant policies before deploying the consumer tenant policies. This enables the relation of the consumer tenant to the provider tenant to be properly formed.
● Caution for shared services use case when importing a tenant and stretching it to other sites
When you import the policies for a consumer tenant and deploy them to multiple sites, including the site where they originated, a new contract is deployed with the same name (different because it is modified by the inter-site relation). To avoid confusion, delete the original contract with the same name on the local site. In the Cisco APIC GUI, the original contract can be distinguished from the contract that is managed by Cisco ACI Multi-Site, because it is not marked with a cloud icon.
● When a contract is established between EPGs in different sites, each EPG and its bridge domain (BD) are mirrored to and appear to be deployed in the other site, while only being actually deployed in its own site. These mirrored objects are known as “shadow” EPGs and BDs.
For example, if one EPG in Site 1 and another EPG in Site 2 have a contract between them, in the Cisco APIC GUI at Site 1 and Site 2, both EPGs will be present. They appear with the same names as the ones that were deployed directly to each site. This is expected behavior and the shadow objects must not be removed. In Cisco APIC releases prior to Release 5.0(2), these objects are always visible. Starting with Cisco APIC, Release 5.0(2), these shadow objects are automatically hidden from view, but are still present in the configuration; you can choose to show these objects by enabling a user setting in each site’s APIC GUI.
For more information, see the Schema Management chapter in the Cisco ACI Multi-Site Configuration Guide.
● Inter-site traffic cannot transit sites.
Site traffic cannot transit sites on the way to another site. For example, when Site 1 routes traffic to Site 3, it cannot be forwarded through Site 2.
● The ? icon in Cisco ACI Multi-Site opens the menu for Show Me How modules, which provide step-by-step help through specific configurations.
◦ If you deviate from the steps while a Show Me How module is in progress, you will not be able to continue.
◦ You must have IPv4 enabled to use the Show Me How modules.
● User passwords must meet the following criteria (see the sketch at the end of these guidelines):
◦ Minimum length is 8 characters
◦ Maximum length is 64 characters
◦ Fewer than three consecutive repeated characters
◦ At least three of the following character types: lowercase, uppercase, digit, symbol
◦ Cannot be easily guessed
◦ Cannot be the username or the reverse of the username
◦ Cannot be any variation of "cisco", "isco", or any permutation of these characters or variants obtained by changing the capitalization of letters therein
● If you are associating a contract with the external EPG, as provider, choose contracts only from the tenant associated with the external EPG. Do not choose contracts from other tenants. If you are associating the contract to the external EPG, as consumer, you can choose any available contract.
● Policy objects deployed from ACI Multi-Site software should not be modified or deleted from any site-APIC. If any such operation is performed, schemas have to be re-deployed from ACI Multi-Site software.
● The Rogue Endpoint feature can be used within each site of an ACI Multi-Site deployment to help with misconfigurations of servers that cause an endpoint to move within the site. The Rogue Endpoint feature is not designed for scenarios where the endpoint may move between sites.
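To make the password criteria above concrete, here is a minimal Python sketch of the listed checks. It is an illustration of the stated policy only, not code from the product; the "cannot be easily guessed" rule has no mechanical definition here and is omitted, and the "cisco"/"isco" check is shown as a substring test even though the full rule also covers permutations of those characters.

import re

def is_valid_password(pw: str, username: str) -> bool:
    """Illustrative check of the password criteria listed above."""
    if not 8 <= len(pw) <= 64:
        return False  # minimum 8, maximum 64 characters
    if re.search(r"(.)\1\1", pw):
        return False  # three or more consecutive repeated characters
    classes = [
        any(c.islower() for c in pw),      # lowercase
        any(c.isupper() for c in pw),      # uppercase
        any(c.isdigit() for c in pw),      # digit
        any(not c.isalnum() for c in pw),  # symbol
    ]
    if sum(classes) < 3:
        return False  # at least three of the four character types
    low = pw.lower()
    if low in (username.lower(), username.lower()[::-1]):
        return False  # the username or its reverse
    if "cisco" in low or "isco" in low:
        return False  # "cisco"/"isco" in any capitalization (substring check)
    return True

print(is_valid_password("We1come2msc!", "admin"))  # True
print(is_valid_password("ciscoPass1!", "admin"))   # False: contains "cisco"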
This release supports the hardware listed in the Cisco ACI Multi-Site Hardware Requirements Guide.
Multi-Site Orchestrator releases have been decoupled from the APIC releases. The APIC clusters in each site as well as the Orchestrator itself can now be upgraded independently of each other and run in mixed operation mode. For more information, see the Interoperability Support section in the “Infrastructure Management” chapter of the Cisco ACI Multi-Site Orchestrator Installation and Upgrade Guide.
Release 3.1(1) supports Multi-Site Orchestrator deployments in VMware ESX (.ova) and Cisco Application Services Engine (.aci); this release cannot be deployed in Cisco Nexus Dashboard.
For the verified scalability limits, see the Cisco ACI Verified Scalability Guides.
See the Cisco Application Policy Infrastructure Controller (APIC) page for ACI Multi-Site documentation. On that page, you can use the "Choose a topic" and "Choose a document type" fields to narrow down the displayed documentation list and find a desired document.
The documentation includes installation, upgrade, configuration, programming, and troubleshooting guides, technical references, release notes, knowledge base (KB) articles, and videos. KB articles provide information about specific use cases or topics. The following table describes the core Cisco Application Centric Infrastructure Multi-Site documentation.
Document |
Description |
This document. Provides release information for the Cisco ACI Multi-Site Orchestrator product. |
|
Provides basic concepts and capabilities of the Cisco ACI Multi-Site. |
|
Provides the hardware requirements and compatibility. |
|
Describes how to install Cisco ACI Multi-Site Orchestrator and perform day-0 operations. |
|
Describes Cisco ACI Multi-Site configuration options and procedures. |
|
Describes how to use the Cisco ACI Multi-Site REST APIs. |
|
Describes how to troubleshoot common Cisco ACI Multi-Site operational issues. |
|
Contains the maximum verified scalability limits for Cisco Application Centric Infrastructure (Cisco ACI), including Cisco ACI Multi-Site. |
|
Contains videos that demonstrate how to perform specific tasks in Cisco ACI Multi-Site. |
To provide technical feedback on this document, or to report an error or omission, send your comments to apic-docfeedback@cisco.com. We appreciate your feedback.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: http://www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
© 2020 Cisco Systems, Inc. All rights reserved.