Installation Requirements and Guidelines
The following sections describe the various requirements for deploying Nexus Dashboard Fabric Controller.
Network Time Protocol (NTP)
Nexus Dashboard nodes must be synchronized with an NTP server; a clock drift of up to 1 second between Nexus Dashboard nodes is tolerated. If the drift between Nexus Dashboard nodes is 1 second or greater, NDFC cluster operations may become unreliable.
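As a quick sanity check, the sketch below queries the configured NTP server from a node and reports the local clock offset; it is illustrative only and assumes the third-party ntplib package is installed (the server name is a placeholder). Run it on every node: if two nodes report offsets whose difference approaches 1 second, investigate before deploying.

```python
# ntp_drift_check.py -- minimal sketch, assuming the third-party "ntplib"
# package (pip install ntplib) and that each Nexus Dashboard node can reach
# the same NTP server. The server name below is a placeholder.
import ntplib

NTP_SERVER = "ntp.example.com"  # placeholder: your NTP server
MAX_DRIFT_SECONDS = 1.0         # NDFC tolerates < 1 s between nodes

def local_offset(server: str = NTP_SERVER) -> float:
    """Return this host's clock offset (seconds) relative to the NTP server."""
    client = ntplib.NTPClient()
    response = client.request(server, version=3)
    return response.offset

if __name__ == "__main__":
    offset = local_offset()
    print(f"clock offset vs {NTP_SERVER}: {offset:+.3f} s")
    # Run on every node: if two nodes' offsets differ by ~1 s or more,
    # inter-node drift exceeds what NDFC tolerates.
    if abs(offset) >= MAX_DRIFT_SECONDS / 2:
        print("warning: offset large enough that inter-node drift may exceed 1 s")
```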
IPv4 and IPv6 Support
Prior releases of Nexus Dashboard supported either pure IPv4 or dual stack IPv4/IPv6 (for management network only) configurations for the cluster nodes. Beginning with release 3.0(1), Nexus Dashboard supports pure IPv4, pure IPv6, or dual stack IPv4/IPv6 configurations for the cluster nodes and services.
When defining the IP configuration, the following guidelines apply (a validation sketch follows this list):
- All nodes and networks in the cluster must have a uniform IP configuration: pure IPv4, pure IPv6, or dual stack IPv4/IPv6.
- If you deploy the cluster in pure IPv4 mode and want to switch to dual stack IPv4/IPv6 or pure IPv6, you must redeploy the cluster.
- For dual stack configurations:
  - Both external (data and management) and internal (app and services) networks must be in dual stack mode. Partial configurations, such as an IPv4 data network with a dual stack management network, are not supported.
  - IPv6 addresses are also required for the physical servers' CIMCs.
  - You can configure either IPv4 or IPv6 addresses for the nodes' management network during the initial node bring-up, but you must provide both types of IPs during the cluster bootstrap workflow. The management IPs are used to log in to the nodes for the first time to initiate the cluster bootstrap process.
  - All internal certificates will be generated to include both IPv4 and IPv6 Subject Alternative Names (SANs).
  - Kubernetes internal core services will start in IPv4 mode.
  - DNS will serve and forward to both IPv4 and IPv6, and will serve both types of records.
  - The VxLAN overlay for peer connectivity will use the data network's IPv4 addresses. Both IPv4 and IPv6 packets are encapsulated within the VxLAN's IPv4 packets.
  - The UI will be accessible on both IPv4 and IPv6 management network addresses.
- For pure IPv6 configurations:
  - Pure IPv6 mode is supported for physical and virtual form factors only. Clusters deployed in AWS, Azure, or an existing Red Hat Enterprise Linux (RHEL) system do not support pure IPv6 mode.
  - You must provide IPv6 management network addresses when initially configuring the nodes. After the nodes (physical, virtual, or cloud) are up, these IPs are used to log in to the UI and continue the cluster bootstrap process.
  - You must provide IPv6 CIDRs for the internal App and Service networks described above.
  - You must provide IPv6 addresses and gateways for the data and management networks described above.
  - All internal certificates will be generated to include IPv6 Subject Alternative Names (SANs).
  - All internal services will start in IPv6 mode.
  - The VxLAN overlay for peer connectivity will use the data network's IPv6 addresses. IPv6 packets are encapsulated within the VxLAN's IPv6 packets.
  - All internal services will use IPv6 addresses.
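To make the uniformity rule concrete, here is a minimal sketch using only Python's standard ipaddress module that classifies each planned network as IPv4, IPv6, or dual stack and rejects mixed plans; all subnet values are hypothetical placeholders.

```python
# ip_mode_check.py -- minimal sketch using only the standard library.
# All subnet values below are hypothetical placeholders for your plan.
import ipaddress

# Planned subnets per Nexus Dashboard network; each entry lists the
# IPv4 and/or IPv6 CIDRs you intend to assign.
planned = {
    "management": ["192.0.2.0/24", "2001:db8:1::/64"],
    "data":       ["198.51.100.0/24", "2001:db8:2::/64"],
    "app":        ["172.17.0.0/16", "2001:db8:3::/64"],
    "service":    ["100.80.0.0/16", "2001:db8:4::/64"],
}

def mode(cidrs):
    """Classify a network's CIDR list as 'ipv4', 'ipv6', or 'dual'."""
    versions = {ipaddress.ip_network(c).version for c in cidrs}
    if versions == {4}:
        return "ipv4"
    if versions == {6}:
        return "ipv6"
    return "dual"

modes = {name: mode(cidrs) for name, cidrs in planned.items()}
print(modes)
# All networks must share one mode: pure IPv4, pure IPv6, or dual stack.
# Partial configurations (e.g., IPv4 data + dual stack management) are invalid.
if len(set(modes.values())) != 1:
    raise SystemExit("invalid plan: networks are not uniformly "
                     "IPv4, IPv6, or dual stack")
```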
Nexus Dashboard
You must have a Cisco Nexus Dashboard cluster deployed and its fabric connectivity configured, as described in the Nexus Dashboard Deployment Guide, before proceeding with the additional requirements and the Nexus Dashboard Fabric Controller service installation described here.
Note: The Fabric Controller service cannot recover from a two master node failure of the Nexus Dashboard cluster on which it is deployed. If you run into a situation where two master nodes are lost, you must rebuild the cluster and restore NDFC from a backup.
| NDFC Release | Minimum Nexus Dashboard Release |
|---|---|
| Release 12.1.3 | Cisco Nexus Dashboard, Release 3.0.1 |
The following Nexus Dashboard form factors are supported with NDFC deployments:
- Cisco Nexus Dashboard physical appliance (.iso)
- VMware ESX (.ova): this release supports ESXi 7.0.
- Linux KVM (.qcow2): this release supports CentOS 7.9 and RHEL 8.6.
- Existing Red Hat Enterprise Linux (SAN Controller persona only): this release supports RHEL 8.6.
Nexus Dashboard Cluster Sizing
Refer to your release-specific Verified Scalability Guide for NDFC for information about the number of Nexus Dashboard cluster nodes required for the desired scale.
Nexus Dashboard supports co-hosting of services. Depending on the type and number of services you choose to run, you may be required to deploy extra worker nodes in your cluster. For cluster sizing information and recommended number of nodes based on specific use cases, see the Cisco Nexus Dashboard Capacity Planning tool.
Nexus Dashboard System Resources
The following table provides information about the server resource requirements for running NDFC on top of Nexus Dashboard. Refer to Nexus Dashboard Capacity Planning to determine the number of switches supported for each deployment.
Cisco Nexus Dashboard can be deployed using a number of different form factors. NDFC can be deployed on the following form factors:
- pND: Physical Nexus Dashboard
- vND: Virtual Nexus Dashboard
- rND: RHEL Nexus Dashboard
| Deployment Type | Node Type | CPUs | Memory | Storage (Throughput: 40-50 MB/s) |
|---|---|---|---|---|
| Fabric Discovery | Virtual Node (vND) – app node | 16 vCPUs | 64 GB | 550 GB SSD |
| Fabric Discovery | Physical Node (pND) (PID: SE-NODE-G2) | 2 x 10-core 2.2 GHz Intel Xeon Silver CPUs | 256 GB of RAM | 4 x 2.4 TB HDDs, 400 GB SSD, 1.2 TB NVMe drive |
| Fabric Discovery | Physical Node (pND) (PID: ND-NODE-L4) | 2.8 GHz AMD CPU | 256 GB of RAM | 4 x 2.4 TB HDDs, 960 GB SSD, 1.6 TB NVMe drive |
| Fabric Controller | Virtual Node (vND) – app node | 16 vCPUs | 64 GB | 550 GB SSD |
| Fabric Controller | Physical Node (pND) (PID: SE-NODE-G2) | 2 x 10-core 2.2 GHz Intel Xeon Silver CPUs | 256 GB of RAM | 4 x 2.4 TB HDDs, 400 GB SSD, 1.2 TB NVMe drive |
| Fabric Controller | Physical Node (pND) (PID: ND-NODE-L4) | 2.8 GHz AMD CPU | 256 GB of RAM | 4 x 2.4 TB HDDs, 960 GB SSD, 1.6 TB NVMe drive |
| SAN Controller | Virtual Node (vND) – app node (with SAN Insights) | 16 vCPUs (with physical reservation) | 64 GB (with physical reservation) | 550 GB SSD |
| SAN Controller | App Node (rND) (with SAN Insights) | 16 vCPUs (with physical reservation) | 64 GB (with physical reservation) | 550 GB SSD |
| SAN Controller | Data Node (vND) (with SAN Insights) | 32 vCPUs (with physical reservation) | 128 GB (with physical reservation) | 3 TB SSD |
| SAN Controller | Data Node (rND) (with SAN Insights) | 32 vCPUs (with physical reservation) | 128 GB (with physical reservation) | 3 TB SSD |
| SAN Controller | Physical Node (pND) (PID: SE-NODE-G2) | 2 x 10-core 2.2 GHz Intel Xeon Silver CPUs | 256 GB of RAM | 4 x 2.4 TB HDDs, 400 GB SSD, 1.2 TB NVMe drive |
| SAN Controller | Physical Node (pND) (PID: ND-NODE-L4) | 2.8 GHz AMD CPU | 256 GB of RAM | 4 x 2.4 TB HDDs, 960 GB SSD, 1.6 TB NVMe drive |
Nexus Dashboard Networks
When first configuring Nexus Dashboard, on every node, you must provide two IP addresses for the two Nexus Dashboard interfaces—one connected to the Data Network and the other to the Management Network. The data network is typically used for the nodes' clustering and north-south connectivity to the physical network. The management network typically connects to the Cisco Nexus Dashboard Web UI, CLI, or API.
For enabling the Nexus Dashboard Fabric Controller, the number of subnets needed for the Management and Data Interfaces on a Nexus Dashboard node varies, depending on the release:
- For releases prior to NDFC release 12.1.3, the management and data interfaces on a Nexus Dashboard node must be in different subnets. The External Service Pool IP addresses may come from certain subnet pools, depending on the type of deployment:
  - For LAN deployments, the External Service IPs may come from the Nexus Dashboard management subnet pool or the data subnet pool, depending on the configured settings.
  - For SAN deployments, the External Service IPs come from the Nexus Dashboard data subnet pool.
- For NDFC release 12.1.3, the LAN deployment requirements do not change, but SAN deployments now support the management and data interfaces in the same subnet. When the management and data interfaces on a Nexus Dashboard node are in the same subnet, the External Service Pool IP addresses also come from that same, single subnet. Note that separate subnets are still supported for SAN deployments, as before.
Different nodes that belong to the same Nexus Dashboard cluster can be either Layer-2 adjacent or Layer-3 adjacent. Refer to Layer 3 Reachability Between Cluster Nodes for more information.
Connectivity between the Nexus Dashboard nodes is required on both networks, with a round trip time (RTT) not exceeding 50 ms. Other applications running on the same Nexus Dashboard cluster may have lower RTT requirements; always use the lowest RTT requirement when deploying multiple applications in the same Nexus Dashboard cluster. Refer to the Nexus Dashboard Fabric Controller Deployment Guide for more information. A quick way to verify node-to-node RTT is sketched below.
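As an illustration, a minimal RTT check, assuming a Unix-like host with the ping CLI available and hypothetical node addresses:

```python
# rtt_check.py -- minimal sketch: ping each peer node and compare the
# average RTT against the 50 ms requirement. Assumes a Unix-like host
# with the "ping" CLI available; node IPs are hypothetical placeholders.
import re
import subprocess

PEER_NODES = ["192.0.2.11", "192.0.2.12"]  # placeholders: other ND nodes
RTT_LIMIT_MS = 50.0

def avg_rtt_ms(host: str, count: int = 5) -> float:
    """Return the average ICMP RTT to host in milliseconds."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    ).stdout
    # Typical summary line: "rtt min/avg/max/mdev = 0.031/0.043/0.058/0.009 ms"
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if not match:
        raise RuntimeError(f"could not parse ping output for {host}")
    return float(match.group(1))

for node in PEER_NODES:
    rtt = avg_rtt_ms(node)
    status = "ok" if rtt <= RTT_LIMIT_MS else "EXCEEDS LIMIT"
    print(f"{node}: avg RTT {rtt:.2f} ms ({status})")
```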
Nexus Dashboard Fabric Controller Ports
In addition to the ports required by the Nexus Dashboard (ND) cluster nodes, the following ports are required by the Nexus Dashboard Fabric Controller (NDFC) service.
Note: The following ports apply to the Nexus Dashboard management network and/or data network interfaces, depending on which interface provides IP reachability from the NDFC service to the switches.
| Service | Port | Protocol | Direction | Connection (applies to both LAN and SAN deployments, unless stated otherwise) |
|---|---|---|---|---|
| SSH | 22 | TCP | Out | SSH is a basic mechanism for accessing devices. |
| SCP | 22 | TCP | Out | SCP clients archiving NDFC backup files to a remote server. |
| SMTP | 25 | TCP | Out | The SMTP port is configurable through NDFC's Server Settings menu. This is an optional feature. |
| DHCP | 67 | UDP | In | If the NDFC local DHCP server is configured for Bootstrap/POAP purposes. This applies to LAN deployments only. |
| DHCP | 68 | UDP | Out | Same as above: used by the NDFC local DHCP server for Bootstrap/POAP. This applies to LAN deployments only. |
| SNMP | 161 | TCP/UDP | Out | SNMP traffic from NDFC to devices. |
| HTTPS/HTTP (NX-API) | 443/80 | TCP | Out | The NX-API HTTPS/HTTP client connects to the device NX-API server on port 443/80, which is also configurable. NX-API is an optional feature used by a limited set of NDFC functions. This applies to LAN deployments only. |
| HTTPS (vCenter, Kubernetes, OpenStack, Discovery) | 443 | TCP | Out | NDFC provides an integrated host and physical network topology view by correlating information obtained from registered VMM domains, such as VMware vCenter or OpenStack, as well as container orchestrators, such as Kubernetes. This is an optional feature. |
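A minimal, illustrative reachability probe for the outbound TCP ports above (UDP services such as SNMP and DHCP cannot be verified with a plain connect and are omitted; the device address is a placeholder):

```python
# port_check.py -- minimal sketch: verify TCP reachability from the NDFC
# host toward a device for the outbound ports listed above. UDP services
# (SNMP, DHCP) cannot be verified with a bare connect and are omitted.
# The device address is a hypothetical placeholder.
import socket

DEVICE = "192.0.2.101"          # placeholder: a managed switch
TCP_PORTS = {22: "SSH/SCP", 443: "HTTPS (NX-API)"}

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port, name in TCP_PORTS.items():
    ok = tcp_reachable(DEVICE, port)
    print(f"{DEVICE}:{port} ({name}): {'reachable' if ok else 'unreachable'}")
```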
Note: The following ports apply to the External Service IPs, also known as persistent IPs, used by some of the NDFC services. These External Service IPs may come from the Nexus Dashboard management subnet pool or the data subnet pool, depending on the configured settings.
| Service | Port | Protocol | Direction | Connection (applies to both LAN and SAN deployments, unless stated otherwise) |
|---|---|---|---|---|
| SCP | 22 | TCP | In | SCP is used by various features to transfer files between devices and the NDFC service. The NDFC SCP service serves as the SCP server for both downloads and uploads. SCP is also used by the POAP client on the devices to download POAP-related files. The SCP-POAP service in NDFC has a persistent IP that is associated with either the management or data subnet; this is controlled by the LAN Device Management Connectivity setting in the NDFC Server Settings. |
| TFTP (POAP) | 69 | TCP | In | Used only for device zero-touch provisioning via POAP, where devices can send (limited, jailed, write-only access to NDFC) basic inventory information to NDFC to start secure POAP communication. NDFC Bootstrap or POAP can be configured for TFTP or HTTP/HTTPS. The SCP-POAP service in NDFC has a persistent IP that is associated with either the management or data subnet; this is controlled by the LAN Device Management Connectivity setting in the NDFC Server Settings. This applies to LAN deployments only. |
| HTTP (POAP) | 80 | TCP | In | Used only for device zero-touch provisioning via POAP, where devices can send (limited, jailed, write-only access to NDFC) basic inventory information to NDFC to start secure POAP communication. NDFC Bootstrap or POAP can be configured for TFTP or HTTP/HTTPS. The SCP-POAP service in NDFC has a persistent IP that is associated with either the management or data subnet; this is controlled by the LAN Device Management Connectivity setting in the NDFC Server Settings. This applies to LAN deployments only. |
| BGP | 179 | TCP | In/Out | For Endpoint Locator, an EPL service is spawned per fabric where it is enabled, each with its own persistent IP. This service is always associated with the Nexus Dashboard data interface. The NDFC EPL service peers with the appropriate BGP entity (typically BGP route reflectors) on the fabric to get the BGP updates needed to track endpoint information. This feature is only applicable for VXLAN BGP EVPN fabric deployments. This applies to LAN deployments only. |
| HTTPS (POAP) | 443 | TCP | In | Secure POAP is accomplished via the NDFC HTTPS server on port 443. The HTTPS server is bound to the SCP-POAP service and uses the same persistent IP assigned to that pod. The SCP-POAP service in NDFC has a persistent IP that is associated with either the management or data subnet; this is controlled by the LAN Device Management Connectivity setting in the NDFC Server Settings. This applies to LAN deployments only. |
| Syslog | 514 | UDP | In | When NDFC is configured as a Syslog server, Syslogs from the devices are sent toward the persistent IP associated with the SNMP-Trap/Syslog service pod. The SNMP-Trap/Syslog service in NDFC has a persistent IP that is associated with either the management or data subnet; this is controlled by the LAN Device Management Connectivity setting in the NDFC Server Settings. |
| SCP | 2022 | TCP | Out | Transports tech-support files from the persistent IP of the NDFC POAP-SCP pod to a separate ND cluster running Nexus Dashboard Insights. The SCP-POAP service in NDFC has a persistent IP that is associated with either the management or data subnet; this is controlled by the LAN Device Management Connectivity setting in the NDFC Server Settings. |
| SNMP Trap | 2162 | UDP | In | SNMP traps from devices to NDFC are sent toward the persistent IP associated with the SNMP-Trap/Syslog service pod. The SNMP-Trap/Syslog service in NDFC has a persistent IP that is associated with either the management or data subnet; this is controlled by the LAN Device Management Connectivity setting in the NDFC Server Settings. |
| GRPC (Telemetry) | 33000 | TCP | In | SAN Insights Telemetry Server, which receives SAN data (such as storage, hosts, and flows) over GRPC transport tied to an NDFC persistent IP. This is enabled on SAN deployments only. |
| GRPC (Telemetry) | 50051 | TCP | In | Information related to multicast flows for IP Fabric for Media deployments, as well as PTP for general LAN deployments, is streamed out via software telemetry to a persistent IP associated with an NDFC GRPC receiver service pod. This is enabled on LAN and Media deployments only. |
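Where NDFC is the Syslog receiver, a quick device-side smoke test can help confirm the path to the persistent IP. The sketch below, with a hypothetical persistent IP, sends one test message over UDP/514; because UDP is connectionless, a successful send only proves the local path, not delivery.

```python
# syslog_probe.py -- minimal sketch: send a test syslog message over UDP
# toward the NDFC SNMP-Trap/Syslog persistent IP. UDP is connectionless,
# so a send that does not raise only proves the local path, not delivery.
# The persistent IP below is a hypothetical placeholder.
import socket

NDFC_SYSLOG_IP = "192.0.2.50"   # placeholder: SNMP-Trap/Syslog persistent IP
SYSLOG_PORT = 514

# <134> = facility local0 (16), severity informational (6): 16*8 + 6
message = b"<134>ndfc-port-test: test message from lab host"
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.sendto(message, (NDFC_SYSLOG_IP, SYSLOG_PORT))
print(f"sent test syslog to {NDFC_SYSLOG_IP}:{SYSLOG_PORT}/udp")
```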
NDFC Latency Requirement
As Cisco Nexus Dashboard Fabric Controller is deployed in Cisco Nexus Dashboard, the latency factor depends on Cisco Nexus Dashboard. Refer to the Nexus Dashboard Fabric Controller Deployment Guide for information about latency.
NDFC Network Connectivity
- LAN Device Management Connectivity: the Fabric Discovery and Fabric Controller features can manage devices over both the Management Network and the Data Network of the ND cluster appliances.
  - When using the Management Network, add routes to all subnets of the devices that NDFC needs to manage or monitor in the Management Network.
  - When using the Data Network, add routes to all subnets of all devices for which POAP is enabled, when using the pre-packaged DHCP server in NDFC for touchless Day-0 device bring-up.
- The SAN Controller persona requires all devices to be reachable via the Data Network of the Nexus Dashboard cluster nodes.
NDFC Persistent IP address
- If the Nexus Dashboard cluster is deployed with Layer 3 separation between nodes, you must configure BGP on all ND nodes.
- All persistent IPs must be configured such that they are not part of any of the Nexus Dashboard nodes' subnets. This is supported only when LAN Device Management Connectivity is set to Data, and it is not supported with a cluster that co-hosts Nexus Dashboard Insights with NDFC.
- If the Nexus Dashboard cluster is deployed with all nodes in the same subnet, persistent IPs can be configured from that same subnet. In this case, persistent IPs must belong to the network chosen based on the LAN Device Management Connectivity setting in the NDFC Server Settings.
For more information, see Persistent IP Requirements for NDFC.
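The non-overlap rule for the Layer 3 case can be checked mechanically. A minimal sketch using only Python's standard ipaddress module, with hypothetical subnets and addresses:

```python
# persistent_ip_check.py -- minimal sketch using the standard "ipaddress"
# module: verify that candidate persistent IPs do NOT fall inside any
# Nexus Dashboard node subnet (required for the Layer 3 / BGP case).
# All subnets and addresses are hypothetical placeholders.
import ipaddress

node_subnets = [
    ipaddress.ip_network("192.0.2.0/24"),     # placeholder: management
    ipaddress.ip_network("198.51.100.0/24"),  # placeholder: data
]
persistent_ips = ["203.0.113.10", "203.0.113.11"]  # placeholder pool

for ip_str in persistent_ips:
    ip = ipaddress.ip_address(ip_str)
    overlapping = [str(net) for net in node_subnets if ip in net]
    if overlapping:
        print(f"{ip}: INVALID for L3 mode, falls inside {overlapping}")
    else:
        print(f"{ip}: ok, outside all node subnets")
```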
Note: Because this release supports NDFC in pure IPv4, pure IPv6, or dual stack IPv4/IPv6 configurations, the following persistent IP requirements are per IP family. For example, if you have deployed in dual stack mode and the following table states that two IP addresses are required in the management network, that means two IPv4 addresses and two IPv6 addresses.
| Management Interface | Data Interface | Persistent IPs |
|---|---|---|
| Layer 2 adjacent | Layer 2 adjacent | When operating in Layer 2 mode with the LAN deployment type and LAN Device Management Connectivity set to Management (default), two IPs in the management network are required for the SNMP-Trap/Syslog and SCP-POAP services; if Endpoint Locator is enabled, one additional IP in the data network is required per fabric. When LAN Device Management Connectivity is set to Data, the two service IPs come from the data network instead. For the SAN Controller deployment type: one IP for SSH, one IP for SNMP/Syslog, and one IP per Nexus Dashboard cluster node for SAN Insights functionality. |
| Layer 3 adjacent | Layer 3 adjacent | When operating in Layer 3 mode with the LAN deployment type, LAN Device Management Connectivity must be set to Data; two IPs are required for the SNMP-Trap/Syslog and SCP-POAP services, plus one additional IP in the data network per fabric if Endpoint Locator is enabled, and all persistent IPs must come from a pool that does not overlap with any Nexus Dashboard node subnet (see NDFC Persistent IP address above). For the SAN Controller deployment type, the same per-service IPs as in Layer 2 mode apply. IP Fabric for Media is not supported in Layer 3 mode. |
POAP-Related Requirements
- Devices must support POAP.
- Devices must have no startup configuration, or the boot poap enable command must be configured to bypass the startup configuration and enter POAP mode.
- A DHCP server with a defined scope is required. Either the pre-packaged NDFC DHCP server or an external DHCP server can be used for POAP.
- The script server that stores the POAP script and the devices' configuration files must be accessible.
- A Software and Image Repository server must be used to store software images for the devices.
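For lab experimentation with the script-server requirement, the sketch below serves a directory of POAP scripts and configuration files over HTTP using only Python's standard library; in production, NDFC's bundled POAP services or a hardened web server fill this role, and the directory path and port are placeholders.

```python
# poap_file_server.py -- lab-only sketch: serve a directory of POAP
# scripts/config files over HTTP so devices can fetch them during
# zero-touch provisioning tests. In production, NDFC's bundled services
# or a hardened web server would fill this role. Path is a placeholder.
import functools
import http.server

POAP_DIR = "/srv/poap"   # placeholder: scripts and device config files
PORT = 8080              # placeholder: use 80/443 per your POAP setup

handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory=POAP_DIR
)
with http.server.ThreadingHTTPServer(("", PORT), handler) as httpd:
    print(f"serving {POAP_DIR} on port {PORT} (Ctrl-C to stop)")
    httpd.serve_forever()
```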
Web Browsers Compatibility
Cisco Nexus Dashboard Fabric Controller GUI is supported on the following web browsers:
- Google Chrome version 101.0.4951.64
- Microsoft Edge version 101.0.1210.47 (64-bit)
- Mozilla Firefox version 100.0.1 (64-bit)
Other Supported Software
The following table lists the other software that is supported by Cisco Nexus Dashboard Fabric Controller Release 12.1.3.
| Component | Features |
|---|---|
| Security | |