Configuring Infra General Settings

Infra Configuration Dashboard

The Infra Configuration page displays an overview of all sites and inter-site connectivity in your Nexus Dashboard Orchestrator deployment and contains the following information:

Figure 1. Infra Configuration Overview
  1. The General Settings tile displays information about BGP peering type and its configuration.

    This is described in detail in the next section.

  2. The On-Premises tiles display information about every on-premises site that is part of your Multi-Site domain along with their number of Pods and spine switches, OSPF settings, and overlay IPs.

    You can click the Pods tile, which displays the number of Pods in the site, to show the Overlay Unicast TEP address of each Pod.

    This is described in detail in Configuring Infra for Cisco APIC Sites.

  3. The Cloud tiles display information about every cloud site that is part of your Multi-Site domain along with their number of regions and basic site information.

    This is described in detail in Configuring Infra for Cisco Cloud Network Controller Sites.

  4. You can click Show Connectivity Status to display intersite connectivity details for a specific site.

  5. You can use the Configure button to navigate to the intersite connectivity configuration, which is described in detail in the following sections.

The following sections describe the steps necessary to configure the general fabric Infra settings. Fabric-specific requirements and procedures are described in the following chapters based on the type of fabric you are managing.

Before you proceed with Infra configuration, you must have configured and added the sites as described in previous sections.

In addition, any infrastructure changes, such as adding or removing spine switches or changing spine node IDs, require that you refresh the Nexus Dashboard Orchestrator fabric connectivity information, as described in Refreshing Site Connectivity Information, as part of the general Infra configuration procedures.

Partial Mesh Intersite Connectivity

In addition to full mesh connectivity, where you configure intersite connectivity from every site managed by your Nexus Dashboard Orchestrator to every other site, this release also supports partial mesh configuration. In partial mesh configuration, you can manage sites in standalone mode with no intersite connectivity to any other site, or you can limit the intersite configuration to only a subset of the other sites in your Multi-Site domain.

Prior to Nexus Dashboard Orchestrator Release 3.6(1), you could stretch templates between sites and refer to policies from other templates deployed to other sites even if the intersite connectivity between those sites was not configured, which meant that the intended traffic flow between the sites would not work.

Beginning with Release 3.6(1), the Orchestrator allows you to stretch templates between two or more sites, and to reference policies from other templates deployed on other sites, only if the intersite connectivity between those sites is properly configured and deployed.

When configuring site infra for Cisco APIC and Cisco Cloud Network Controller sites as described in the following sections, you can explicitly choose, for each site, the other sites to which infra connectivity is established and provide configuration information for those connections only.

Partial Mesh Connectivity Guidelines

When configuring partial mesh connectivity, consider the following guidelines:

  • Partial mesh connectivity is supported between two cloud sites or between a cloud and an on-premises site.

    Full mesh connectivity is automatically established between all on-premises sites.

  • Partial mesh connectivity is supported using BGP-EVPN or BGP-IPv4 protocols.

    Note, however, that stretching a template is allowed only for sites that are connected using the BGP-EVPN protocol. If you are using BGP-IPv4 to connect two or more sites, any template assigned to any of those sites can be deployed to one site only, as illustrated in the sketch below.
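
The following sketch illustrates this stretching rule in partial mesh deployments. The site names, the connectivity map, and the can_stretch_template function are hypothetical and serve only to make the rule concrete; Nexus Dashboard Orchestrator performs this check internally based on the intersite connectivity you have configured and deployed.

    # Minimal sketch of the stretched-template rule described above.
    # Site names and the connectivity map are hypothetical; "BGP-EVPN"
    # and "BGP-IPv4" refer to the intersite connectivity protocols above.

    # Pairwise intersite connectivity, as configured and deployed.
    connectivity = {
        frozenset({"on-prem-1", "cloud-aws"}): "BGP-EVPN",
        frozenset({"on-prem-1", "cloud-azure"}): "BGP-IPv4",
        # No entry means the two sites have no intersite connectivity.
    }

    def can_stretch_template(sites):
        """A template can be stretched only if every pair of target sites
        is connected over BGP-EVPN."""
        return all(
            connectivity.get(frozenset({a, b})) == "BGP-EVPN"
            for i, a in enumerate(sites)
            for b in sites[i + 1:]
        )

    print(can_stretch_template(["on-prem-1", "cloud-aws"]))    # True
    print(can_stretch_template(["on-prem-1", "cloud-azure"]))  # False: BGP-IPv4 only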

Configuring Infra: General Settings

This section describes how to configure general Infra settings for all the sites.


Note


Some of the following settings apply to all sites, while others are required only for specific types of sites (for example, Cloud Network Controller sites). Ensure that you complete all the required configurations in infra general settings before proceeding to the site-local settings specific to each site.


Procedure


Step 1

Log in to the Cisco Nexus Dashboard Orchestrator GUI.

Step 2

In the left navigation menu, select Infrastructure > Site Connectivity.

Step 3

In the main pane, click Configure.

Step 4

In the left sidebar, select General Settings.

Step 5

Provide Control Plane Configuration.

  1. Select the Control Plane Configuration tab.

  2. Choose BGP Peering Type.

    • full-mesh—All border gateway switches in each site will establish peer connectivity with remote sites' border gateway switches.

      In full-mesh configuration, Nexus Dashboard Orchestrator uses the spine switches for ACI-managed fabrics and the border gateways for NDFC-managed fabrics.

    • route-reflector—The route-reflector option allows you to specify one or more control-plane nodes to which each site establishes MP-BGP EVPN sessions. Using route-reflector nodes avoids creating MP-BGP EVPN full mesh adjacencies between all the sites managed by NDO (see the session-count sketch at the end of this step).

      For ACI fabrics, the route-reflector option is effective only for fabrics that are part of the same BGP ASN.

  3. In the Keepalive Interval (Seconds) field, enter the keepalive interval in seconds.

    We recommend keeping the default value.

  4. In the Hold Interval (Seconds) field, enter the hold interval in seconds.

    We recommend keeping the default value.

  5. In the Stale Interval (Seconds) field, enter the stale interval in seconds.

    We recommend keeping the default value.

  6. Choose whether you want to turn on the Graceful Helper option.

  7. Provide the Maximum AS Limit.

    We recommend keeping the default value.

  8. Provide the BGP TTL Between Peers.

    We recommend keeping the default value.

  9. Provide the OSPF Area ID.

    If you do not have any Cloud Network Controller sites, this field will not be present in the UI.

    This is the OSPF area ID used by cloud sites for on-premises IPN peering.

  10. (Optional) Enable IANA Assigned Port for CloudSec encryption.

    By default, CloudSec uses a proprietary UDP port. This option allows you to configure CloudSec to use the official IANA-reserved port 8017 for CloudSec encryption between sites.

    Note

    The IANA-reserved port is supported for Cisco APIC sites running release 5.2(4) or later.

    To change this setting, CloudSec must be disabled on all sites. If you want to enable the IANA-reserved port but already have CloudSec encryption enabled for one or more of your sites, first disable CloudSec for all sites, enable the IANA Reserve UDP Port option, and then re-enable CloudSec for the required sites.

    For detailed information and steps for configuring CloudSec, see the "CloudSec Encryption" chapter of the Nexus Dashboard Orchestrator Configuration Guide for ACI Fabrics.
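
To make the BGP Peering Type choice above more concrete, the following sketch compares the number of control-plane sessions under the two options. It counts one session per site pair for simplicity; in practice, peering is established per spine switch (ACI) or border gateway (NDFC), and the site and route-reflector counts shown are purely illustrative.

    # Illustrative comparison of MP-BGP EVPN session counts for the two
    # BGP peering types; counts are per site pair and purely hypothetical.

    def full_mesh_sessions(num_sites):
        # Every site peers with every other site.
        return num_sites * (num_sites - 1) // 2

    def route_reflector_sessions(num_sites, num_rr):
        # Each non-RR site peers only with the route-reflector nodes;
        # the route reflectors also peer with each other.
        return num_rr * (num_sites - num_rr) + num_rr * (num_rr - 1) // 2

    sites = 6
    print(full_mesh_sessions(sites))           # 15 sessions
    print(route_reflector_sessions(sites, 2))  # 9 sessions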

Step 6

Provide the IPN Devices information.

If you do not plan to configure inter-site connectivity between on-premises and cloud sites, you can skip this step.

When you configure inter-site underlay connectivity between on-premises and cloud sites as described in later sections, you will need to select an on-premises IPN device which will establish connectivity to the cloud CSRs. These IPN devices must first be defined here before they are available in the on-premises site configuration screen, which is described in more detail in Configuring Infra: On-Premises Site Settings.

  1. Select the On Premises IPSec Devices tab.

  2. Click +Add On-Premises IPSec Device.

  3. Choose whether the device is Unmanaged or Managed and provide the device information.

    This defines whether or not the device is directly managed by NDFC (see the example records at the end of this step):

    • For Unmanaged IPN devices, simply provide the Name and the IP Address of the device.

      The IP address you provide will be used as the tunnel peer address from the cloud CSRs, not the IPN device's management IP address.

    • For Managed IPN devices, choose the NDFC Site that contains the device and then the Device from that site.

      Then choose the Interface on the device that is facing the Internet and provide the Next Hop IP address, which is the IP address of the gateway that is connecting to the Internet.

  4. Click the check mark icon to save the device information.

  5. Repeat this step for any additional IPN devices you want to add.
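
The following sketch summarizes the two sets of information as simple records. The dictionary keys mirror the UI fields described above, and all values are hypothetical examples.

    # Sketch of the information collected for an on-premises IPSec device,
    # mirroring the UI fields above. All values are hypothetical.

    unmanaged_ipn_device = {
        "name": "ipn-router-1",
        # Used as the tunnel peer address from the cloud CSRs,
        # not the device's management IP address.
        "ip_address": "198.51.100.10",
    }

    managed_ipn_device = {
        "ndfc_site": "ndfc-fabric-1",    # NDFC site that contains the device
        "device": "core-router-2",
        "interface": "Ethernet1/1",      # interface facing the Internet
        "next_hop_ip": "203.0.113.1",    # gateway connecting to the Internet
    }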

Step 7

Provide the External Devices information.

If you do not have any Cloud Network Controller sites, this tab will not be present in the UI.

If you do not have any Cloud Network Controller sites in your Multi-Site domain or you do not plan to configure connectivity between cloud sites and branch routers or other external devices, you can skip this step.

The following steps describe how to provide information about any branch routers or external devices to which you want to configure connectivity from your cloud sites.

  1. Select the External Devices tab.

    This tab will only be available if you have at least one cloud site in your Multi-Site domain.

  2. Click Add External Device.

    The Add External Device dialog opens.

  3. Provide the Name, IP Address, and BGP Autonomous System Number for the device.

    The IP address you provide will be used as the tunnel peer address from the Cloud Network Controller's CSRs, not the device's management IP address. The connectivity will be established over the public Internet using IPSec (see the example entry at the end of this step).

  4. Click the check mark icon to save the device information.

  5. Repeat this step for any additional external devices you want to add.

After you have added all the external devices, ensure that you complete the next step to provide the IPSec tunnel subnet pools from which the internal IP addresses will be allocated for these tunnels.
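
The following sketch shows what a single external device entry amounts to. The dictionary keys and values are hypothetical; the final check simply reflects the standard 4-byte BGP autonomous system number range.

    # Hypothetical external (branch router) device entry, mirroring the
    # UI fields above.

    external_device = {
        "name": "branch-router-1",
        # Tunnel peer address reachable over the public Internet (IPSec),
        # not the device's management IP address.
        "ip_address": "192.0.2.20",
        "bgp_asn": 65010,
    }

    # BGP ASNs are 4-byte values.
    assert 1 <= external_device["bgp_asn"] <= 4294967295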

Step 8

Provide the IPSec Tunnel Subnet Pools information.

If you do not have any Cloud Network Controller sites, this tab will not be present in the UI.

There are two types of subnet pools that you can provide here:

  • External Subnet Pool—used for connectivity between cloud site CSRs and other sites (cloud or on-premises).

    These are large global subnet pools that are managed by Nexus Dashboard Orchestrator. The Orchestrator creates smaller subnets from these pools and allocates them to sites to be used for inter-site IPsec tunnels and external connectivity IPsec tunnels.

    You must provide at least one external subnet pool if you want to enable external connectivity from one or more of your cloud sites.

  • Site-Specific Subnet Pool—used for connectivity between cloud site CSRs and external devices.

    These subnets can be defined when the external connectivity IPsec tunnels must be in a specific range, for example, when a specific subnet is already used to allocate IP addresses to the external router and you want to continue using those subnets for IPsec tunnels for NDO and cloud sites. These subnets are not managed by the Orchestrator; each subnet is assigned to a site in its entirety to be used locally for external connectivity IPsec tunnels.

    If you do not provide any named subnet pools but still configure connectivity between a cloud site's CSRs and external devices, the external subnet pool will be used for IP allocation.

Note

The minimum mask length for both subnet pools is /24.

To add one or more External Subnet Pools:

  1. Select the IPSec Tunnel Subnet Pools tab.

  2. In the External Subnet Pool area, click +Add IP Address to add one or more external subnet pools.

    This subnet will be used to address the IPsec tunnel interfaces and loopbacks of the Cloud Routers used for on-premises connectivity, which you previously configured in the Cloud Network Controller for inter-site connectivity in earlier Nexus Dashboard Orchestrator releases.

    The subnets must not overlap with other on-premises TEP pools, should not begin with 0.x.x.x or 0.0.x.x, and should have a network mask between /16 and /24, for example 30.29.0.0/16 (see the validation sketch at the end of this step).

  3. Click the check mark icon to save the subnet information.

  4. Repeat these substeps for any additional subnet pools you want to add.

To add one or more Site-Specific Subnet Pools:

  1. Select the IPSec Tunnel Subnet Pools tab.

  2. In the Site-Specific Subnet Pools area, click +Add IP Address to add one or more site-specific subnet pools.

    The Add Named Subnet Pool dialog opens.

  3. Provide the subnet Name.

    You can later use the subnet pool's name to choose the pool from which to allocate the IP addresses.

  4. Click +Add IP Address to add one or more subnet pools.

    The subnets must have a network mask between /16 and /24 and must not begin with 0.x.x.x or 0.0.x.x, for example 30.29.0.0/16.

  5. Click the check mark icon to save the subnet information.

    Repeat the steps if you want to add multiple subnets to the same named subnet pool.

  6. Click Save to save the named subnet pool.

  7. Repeat these substeps for any additional named subnet pools you want to add.
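
The following sketch checks a candidate pool against the subnet rules listed in this step (network mask between /16 and /24, not beginning with 0.x.x.x or 0.0.x.x). The is_valid_tunnel_pool function is only an illustration of the documented constraints, not the validation that Nexus Dashboard Orchestrator itself performs; overlap with existing TEP pools still needs to be checked against your own addressing plan.

    # Illustration of the documented subnet pool constraints using the
    # standard-library ipaddress module.
    import ipaddress

    def is_valid_tunnel_pool(subnet):
        network = ipaddress.ip_network(subnet, strict=True)
        # Network mask must be between /16 and /24.
        if not 16 <= network.prefixlen <= 24:
            return False
        # Must not begin with 0.x.x.x (which also covers 0.0.x.x).
        if network.network_address.packed[0] == 0:
            return False
        return True

    print(is_valid_tunnel_pool("30.29.0.0/16"))  # True (example from this guide)
    print(is_valid_tunnel_pool("10.0.0.0/8"))    # False: mask shorter than /16
    print(is_valid_tunnel_pool("0.1.0.0/16"))    # False: begins with 0.x.x.x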


What to do next

After you have configured general infra settings, you must still provide additional information for site-specific configurations based on the type of sites (ACI, Cloud Network Controller, or NDFC) you are managing. Follow the instructions described in the following sections to provide site-specific infra configurations.