Prerequisites

This chapter provides release-specific prerequisite information for your deployment of Cisco Nexus Dashboard Fabric Controller.

Prerequisites

This section provides detailed information about the prerequisites that you must complete before launching Cisco Nexus Dashboard Fabric Controller.

Nexus Dashboard

You must have a Cisco Nexus Dashboard cluster deployed and its fabric connectivity configured, as described in the Cisco Nexus Dashboard Deployment Guide, before proceeding with any additional requirements and the Nexus Dashboard Fabric Controller service installation described here.


Note


The Fabric Controller service cannot recover from the failure of two master nodes in the Nexus Dashboard cluster where it is deployed. As a result, we recommend that you maintain at least one standby node in your Nexus Dashboard cluster and create regular backups of your NDFC configuration, as described in the Operations > Backup and Restore chapter of the Cisco NDFC-Fabric Controller Configuration Guide for your release.

If you run into a situation where two master nodes of your Nexus Dashboard cluster fail, you can follow the instructions described in the Troubleshooting > Replacing Two Master Nodes with Standby Nodes section of the Cisco Nexus Dashboard User Guide for your release to recover the cluster and NDFC configuration.


NDFC Release          Minimum Nexus Dashboard Release
Release 12.1.2e       Cisco Nexus Dashboard, Release 2.3.1c, 2.3.2b, or 2.3.2d (2.3.2d recommended) or later

The following Nexus Dashboard form factors are supported with NDFC deployments:

  • Cisco Nexus Dashboard physical appliance (.iso)

  • VMware ESX (.ova)

    • ESXi 6.7

    • ESXi 7.0

  • Linux KVM (.qcow2)

    • CentOS 7.9

    • RHEL 8.6

  • Existing Red Hat Enterprise Linux (SAN Controller persona only)

    • RedHat Enterprise Linux (RHEL) 8.6

Sizing of the Cluster

Refer to your release-specific Verified Scalability Guide for NDFC for information about the number of Nexus Dashboard cluster nodes required for the desired scale.

Nexus Dashboard supports co-hosting of services. Depending on the type and number of services you choose to run, you may be required to deploy extra worker nodes in your cluster. For cluster sizing information and recommended number of nodes based on specific use cases, see the Cisco Nexus Dashboard Capacity Planning tool.

Network Connectivity

  • LAN Device Management Connectivity – The Fabric Discovery and Fabric Controller features can manage devices over both the Management network and the Data network of the Nexus Dashboard cluster nodes. A basic reachability pre-check sketch follows this list.

  • When using the Management network, add routes in the Nexus Dashboard Management network configuration to all subnets of the devices that NDFC needs to manage or monitor.

  • When using the Data network, add routes to the subnets of all devices for which POAP is enabled, if you are using the DHCP server pre-packaged with NDFC for touchless Day-0 device bring-up.

  • The SAN Controller persona requires all devices to be reachable over the Data network of the Nexus Dashboard cluster nodes.
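
The following is a minimal pre-check sketch, not part of the official procedure: run from a host attached to the network selected by your LAN Device Management Connectivity setting, it attempts a TCP connection to each device that NDFC will manage. The device addresses and the SSH port are placeholders for illustration only.

    #!/usr/bin/env python3
    """Rough reachability pre-check for devices that NDFC will manage.

    Run from a host on the Management or Data network, matching the LAN
    Device Management Connectivity setting. Addresses below are placeholders.
    """
    import socket

    DEVICES = ["10.0.10.11", "10.0.10.12", "10.0.20.21"]  # hypothetical switch IPs
    SSH_PORT = 22            # device discovery typically relies on SSH/NX-API
    TIMEOUT_SECONDS = 3

    for ip in DEVICES:
        try:
            # A successful TCP handshake is a reasonable proxy for "routable".
            with socket.create_connection((ip, SSH_PORT), timeout=TIMEOUT_SECONDS):
                print(f"{ip}: reachable")
        except OSError as err:
            print(f"{ip}: NOT reachable ({err}) -- check the routes added above")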

Persistent IP address

  • Persistent IPs are needed by NDFC for multiple use cases.

  • If the Nexus Dashboard cluster is deployed across a Layer 3 separation of the network, configure BGP on all Nexus Dashboard nodes.

  • In that case, all persistent IPs must be configured so that they are not part of any of the Nexus Dashboard nodes' subnets. This placement is supported only when LAN Device Management connectivity is set to Data, and it is not supported in a cluster that co-hosts Nexus Dashboard Insights with NDFC.

  • If the Nexus Dashboard cluster is deployed with all nodes in the same subnet, the persistent IPs can be chosen from that same subnet.

    In this case, the persistent IPs must belong to the network chosen by the LAN Device Management connectivity setting in the NDFC Server Settings.

    For more information, see Persistent IP Requirements for NDFC. A minimal placement-check sketch also follows this list.

  • Fabric Discovery – 2 IPs based on LAN Device Management Connectivity.

  • Fabric Controller – 2 IPs based on LAN Device Management connectivity, plus 1 IP for each EPL fabric instance

  • Fabric Controller with IPFM – 2 IPs based on LAN Device Management connectivity, plus:

    • 1 IP for ingest of software Telemetry for a single node IPFM deployment

    • 3 IPs for ingest of software Telemetry for a three node IPFM deployment

  • SAN Controller:

    • SAN Controller 3 Node Cluster – 2 IPs for Data Network + 3 IPs for SAN Insights

    • SAN Controller 1 Node Cluster – 2 IPs for Data Network + 1 IP for SAN Insights
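
The placement rules above can be sanity-checked with Python's standard ipaddress module. This is a hedged sketch, not an NDFC tool; the node subnets, candidate persistent IPs, and the LAYER3_DEPLOYMENT flag are placeholders you would replace with your own values.

    #!/usr/bin/env python3
    """Check candidate persistent IPs against the Nexus Dashboard node subnets.

    Same-subnet (Layer 2) cluster: persistent IPs should come from the node
    subnet of the network chosen by LAN Device Management connectivity.
    Layer 3 cluster (BGP on all nodes): persistent IPs must NOT fall inside
    any node subnet. All addresses below are placeholders.
    """
    import ipaddress

    NODE_SUBNETS = [ipaddress.ip_network(s) for s in ("192.168.10.0/24",)]
    PERSISTENT_IPS = [ipaddress.ip_address(a) for a in ("192.168.10.50", "192.168.10.51")]
    LAYER3_DEPLOYMENT = False   # True if the cluster spans separate Layer 3 networks

    for ip in PERSISTENT_IPS:
        in_node_subnet = any(ip in net for net in NODE_SUBNETS)
        ok = (not in_node_subnet) if LAYER3_DEPLOYMENT else in_node_subnet
        print(f"{ip}: {'OK' if ok else 'violates the placement rule for this deployment'}")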

POAP related requirements

  • Devices must support POAP.

  • The device must have no startup configuration, or the boot poap enable command must be configured to bypass the startup configuration and enter POAP mode.

  • A DHCP server with a defined scope must be available. For POAP, you can use either the DHCP server pre-packaged with NDFC or an external DHCP server.

  • The script server that stores the POAP script and the devices' configuration files must be accessible; a retrieval-check sketch follows this list.

  • A software and image repository server must be available to store software images for the devices.
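
As a quick validation of the script server requirement, the sketch below fetches the POAP script and a sample device configuration file over HTTP. The server address and file paths are hypothetical; substitute the URLs you plan to reference from your DHCP scope or POAP settings.

    #!/usr/bin/env python3
    """Verify the script server can actually serve the POAP artifacts."""
    import urllib.error
    import urllib.request

    FILES = [
        "http://10.0.0.5/poap/poap.py",            # POAP script (placeholder path)
        "http://10.0.0.5/poap/conf/leaf-101.cfg",  # per-device config (placeholder path)
    ]

    for url in FILES:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                print(f"{url}: HTTP {resp.status}, {resp.length or 'unknown'} bytes")
        except (urllib.error.URLError, OSError) as err:
            print(f"{url}: NOT retrievable ({err})")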

Network Time Protocol (NTP)

Nexus Dashboard nodes must be in synchronization with the NTP server; clock skew of less than 1 second between the Nexus Dashboard nodes is tolerated. If the skew between nodes reaches 1 second or more, operations on the NDFC cluster may become unreliable.
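
If you want a rough measurement of the skew between nodes, the sketch below queries each node with the third-party ntplib package and compares the reported offsets pairwise. It assumes the nodes answer NTP queries, which depends on your configuration and may not hold; the node addresses are placeholders.

    #!/usr/bin/env python3
    """Rough clock-skew check between Nexus Dashboard nodes (requires ntplib)."""
    import itertools
    import ntplib   # pip install ntplib

    NODES = ["192.168.10.11", "192.168.10.12", "192.168.10.13"]  # placeholder node IPs
    client = ntplib.NTPClient()

    # Offset of each node's clock relative to this host, in seconds.
    offsets = {node: client.request(node, version=3).offset for node in NODES}

    # Pairwise skew; NDFC expects this to stay below 1 second.
    for a, b in itertools.combinations(NODES, 2):
        skew = abs(offsets[a] - offsets[b])
        print(f"{a} <-> {b}: {skew:.3f}s skew ({'OK' if skew < 1.0 else 'TOO HIGH'})")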

Restoring configurations

If this system is to be restored from a previously taken backup, you must upload a backup file that was taken from the same NDFC version.

Upgrading to NDFC Release 12.1.2e

  • Upgrading from NDFC Release 12.1.1e

    • Ensure that all the preview/beta features are disabled.

    • Do not proceed with the NDFC upgrade if the NDFC service or the Nexus Dashboard cluster is not healthy.

  • Upgrading from NDFC Release 12.0.2f

    • Ensure that all the preview/beta features are disabled.

    • Do not proceed with the NDFC upgrade if the NDFC service or the Nexus Dashboard cluster is not healthy.

  • Upgrading from NDFC Release 12.0.1a

    Direct upgrade from Release 12.0.1a to Release 12.1.2e is not supported. You must first upgrade to Release 12.0.2f or to Release 12.1.1e, and then upgrade to Release 12.1.2e; see the upgrade-path sketch after this list.


    Note


    Attempting to upgrade directly from Release 12.0.1a to Release 12.1.2e makes the system unusable. Also, for NDFC Release 12.0.1a, do not upgrade Nexus Dashboard to Release 2.3.1c.


  • Upgrading from DCNM Release 11.5(4)

    If this system is restored from an 11.5(x) backup, the syslog/trap IP address from the backup may be reused in the new cluster if it is suitable for the subnets of the Nexus Dashboard nodes (only Layer 2 mode is supported).

    Additional persistent IPs are required as described in Persistent IP Requirements for NDFC.
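
The supported paths described in this section can be summarized in a small helper. This is an illustrative sketch only; the handling of 11.5(x) reflects the backup-and-restore note above rather than an in-place upgrade.

    #!/usr/bin/env python3
    """Summarize the upgrade paths to NDFC Release 12.1.2e listed in this section."""

    TARGET = "12.1.2e"
    DIRECT_SOURCES = {"12.1.1e", "12.0.2f"}                  # may upgrade directly
    INTERMEDIATE_HOP = {"12.0.1a": ("12.0.2f", "12.1.1e")}   # must hop first


    def upgrade_path(current: str) -> str:
        if current in DIRECT_SOURCES:
            return f"{current} -> {TARGET}"
        if current in INTERMEDIATE_HOP:
            hops = " or ".join(INTERMEDIATE_HOP[current])
            return f"{current} -> ({hops}) -> {TARGET}"
        if current.startswith("11.5"):
            return f"{current}: restore an 11.5(x) backup into the new {TARGET} cluster"
        return f"{current}: no supported path listed in this section"


    for release in ("12.1.1e", "12.0.2f", "12.0.1a", "11.5(4)"):
        print(upgrade_path(release))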

For detailed information and procedure to upgrade NDFC, see Upgrading Cisco Nexus Dashboard Fabric Controller.