Cisco Nexus Dashboard Fabric Controller Verified Scalability
Verified Scale Limits for Release 12.1.1e
This section provides verified scalability values for various deployment types for Cisco Nexus Dashboard Fabric Controller, Release 12.1.1e.
The values were validated on testbeds enabled with a reasonable number of features; they are not theoretical system limits for Cisco Nexus Dashboard Fabric Controller software or for Cisco Nexus/MDS switch hardware and software. If you try to achieve maximum scalability by scaling multiple features at the same time, results might differ from the values listed here.
Nexus Dashboard Server Resource (CPU/Memory) Requirements
The following table lists the server resource (CPU/memory) requirements for running NDFC on top of Nexus Dashboard. Refer to Nexus Dashboard Capacity Planning to determine the number of switches supported for each deployment.
Deployment Type | Node Type | CPUs | Memory | Storage (Throughput: 40-50 MB/s)
---|---|---|---|---
Fabric Discovery | Virtual Node (vND) – app OVA | 16 vCPUs | 64 GB | 550 GB SSD
Fabric Discovery | Physical Node (pND) (PID: SE-NODE-G2) | 2x 10-core 2.2 GHz Intel Xeon Silver CPUs | 256 GB of RAM | 4x 2.4 TB HDDs, 400 GB SSD, 1.2 TB NVMe drive
Fabric Controller | Virtual Node (vND) – app OVA | 16 vCPUs | 64 GB | 550 GB SSD
Fabric Controller | Physical Node (pND) (PID: SE-NODE-G2) | 2x 10-core 2.2 GHz Intel Xeon Silver CPUs | 256 GB of RAM | 4x 2.4 TB HDDs, 400 GB SSD, 1.2 TB NVMe drive
SAN Controller | Virtual Node (vND) – app OVA (without SAN Insights) | 16 vCPUs with physical reservation | 64 GB with physical reservation | 550 GB SSD
SAN Controller | Data Node (vND) – data OVA (with SAN Insights) | 32 vCPUs with physical reservation | 128 GB with physical reservation | 3 TB SSD
SAN Controller | Physical Node (pND) (PID: SE-NODE-G2) | 2x 10-core 2.2 GHz Intel Xeon Silver CPUs | 256 GB of RAM | 4x 2.4 TB HDDs, 400 GB SSD, 1.2 TB NVMe drive
SAN Controller | Virtual Node (vND) (Default Profile on Linux RHEL) | 16 vCPUs | 64 GB | 550 GB SSD, 500 GB HDD
SAN Controller | Virtual Node (vND) (Large Profile on Linux RHEL) | 32 vCPUs | 128 GB | 3 TB
Scale Limits for Cohosting NDFC and other Services
Profile | Node Type | Verified Limit
---|---|---
Nexus Dashboard Insights and Nexus Dashboard Fabric Discovery | 4-Node pND (SE) | 50 Switches, 10K Flows
Nexus Dashboard Insights and Nexus Dashboard Fabric Controller | 5-Node pND (SE) | 50 Switches, 10K Flows
Scale Limits for NDFC Fabric Discovery
Profile | Deployment Type | Verified Limit
---|---|---
Fabric Discovery | 1-Node vND (app OVA) | <= 25 Switches (Non-Production)
Fabric Discovery | 3-Node vND (app OVA) | 150 Switches
Fabric Discovery | 3-Node pND (SE) | 1000 Switches
Scale Limits for NDFC Fabric Controller
Profile | Deployment Type | Verified Limit
---|---|---
Fabric Controller | 1-Node vND (app OVA) | <= 25 Switches (Non-Production)
Fabric Controller | 3-Node vND (app OVA) | 80 Switches
Fabric Controller | 3-Node pND (SE) | 400 Switches
Fabric Controller | 5-Node vND (app OVA) | 400 Switches
Description | Verified Limit
---|---
Switches per fabric in NDFC | 150
Switches per NDFC instance in managed mode | 400
Switches per NDFC instance in monitored mode | 1000
Fabrics supported per NDFC instance | 25
Physical interfaces per NDFC instance | 30000
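The per-fabric and per-instance limits above lend themselves to a quick programmatic sanity check during capacity planning. The following is a minimal Python sketch; the limit values are taken from the table above, while the function and dictionary names are illustrative only and are not part of any NDFC API.

```python
# Minimal capacity-planning sketch for a Fabric Controller deployment.
# Limit values come from the verified scalability table above;
# the function and dictionary names are illustrative only.

FABRIC_CONTROLLER_LIMITS = {
    "switches_per_fabric": 150,
    "switches_per_instance_managed": 400,
    "switches_per_instance_monitored": 1000,
    "fabrics_per_instance": 25,
}

def check_deployment(fabric_switch_counts, managed=True):
    """Return the verified limits that a planned deployment would exceed."""
    limits = FABRIC_CONTROLLER_LIMITS
    violations = []

    # Number of fabrics on one NDFC instance.
    if len(fabric_switch_counts) > limits["fabrics_per_instance"]:
        violations.append("fabrics per instance")

    # Switch count within each individual fabric.
    for fabric, count in fabric_switch_counts.items():
        if count > limits["switches_per_fabric"]:
            violations.append(f"switches per fabric ({fabric})")

    # Total switch count across the instance, by management mode.
    total = sum(fabric_switch_counts.values())
    per_instance = (limits["switches_per_instance_managed"] if managed
                    else limits["switches_per_instance_monitored"])
    if total > per_instance:
        violations.append("switches per instance")

    return violations

# Example: three fabrics of 150, 150, and 120 managed switches total 420,
# which exceeds the 400-switch per-instance limit for managed mode.
print(check_deployment({"site1": 150, "site2": 150, "site3": 120}))
```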
Description | Verified Limit
---|---
Fabric Underlay and Overlay | 
Switches per fabric | 150
Overlay scale for VRFs and networks | 500 VRFs, 1000 Layer-3 networks, or 1500 Layer-2 networks
VRF instances for external connectivity | 500
IPAM Integrator application | 150 networks with a total of 4K IP allocations on the Infoblox server
ToR and Leaf devices | An Easy fabric can manage both Layer-2 ToRs and VXLAN Leafs. The maximum scale for this fabric is 50 Leafs and 200 ToRs.
Endpoint Locator | 
Endpoints | 50000
Multi-Site Domain | 
Sites | 12
Virtual Machine Manager (VMM) | 
Virtual Machines (VMs) | 5500
VMware vCenter Servers | 4
Kubernetes Visualizer application | Maximum of 159 namespaces with a maximum of 1002 pods
Note: Refer to the following table if you are transitioning management of a Cisco Nexus 9000 Series switch-based VXLAN EVPN fabric to NDFC. Before the migration, the fabric was NFM-managed or CLI-configured.
Description | Verified Limit
---|---
Fabric Underlay and Overlay | 
Switches per fabric | 100
Physical interfaces | 5000
VRF instances | 100
Overlay networks | 500
VRF instances for external connectivity | 100
Endpoint Locator | 
Endpoints | 50000
IPAM Integrator application | 150 networks with a total of 4K IP allocations on the Infoblox server
Scale Limits for IPFM Fabrics
Profile | Deployment Type | Verified Limit
---|---|---
Fabric Controller | 1-Node vND | 35 switches (2 Spines and 33 Leafs)
Fabric Controller | 3-Node vND | 35 switches (2 Spines and 33 Leafs)
Fabric Controller | 1-Node pND | 35 switches (2 Spines and 33 Leafs)
Fabric Controller | 3-Node pND | 80 switches (2 Spines and 78 Leafs)
Description | Verified Limit
---|---
Switches | 80
Number of routes | 32000
Host Policy | 
Sender | 8000
Receiver | 8000
PIM | 512
Flow Policy | 2000
ASM group-range | 20
NBM Static Flows | 
Per-switch maximum mroutes (on the receiver leaf where the static OIF is programmed) | 1500
Per-fabric maximum mroutes | 8000
VRFs | 16
RTP Flow Monitoring with ACL | 
ACL | 128 IPv4 ACL entries or 64 IPv6 ACL entries (a total of 128 TCAM spaces); see the sketch after this table
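The ACL limit above is counted in TCAM spaces rather than entries: the stated values imply that an IPv4 entry consumes one TCAM space and an IPv6 entry consumes two (64 IPv6 entries = 128 spaces). A minimal Python sketch of that bookkeeping follows; the names are illustrative only.

```python
# Sketch of TCAM-space accounting for the RTP flow-monitoring ACL limit.
# Derived from the verified limit above: 128 IPv4 entries or 64 IPv6 entries
# fit in 128 TCAM spaces, which implies 1 space per IPv4 entry and
# 2 spaces per IPv6 entry. Names are illustrative, not an NDFC API.

TCAM_SPACES_AVAILABLE = 128
SPACES_PER_IPV4_ENTRY = 1
SPACES_PER_IPV6_ENTRY = 2

def acl_fits(ipv4_entries: int, ipv6_entries: int) -> bool:
    """Return True if a mixed ACL fits within the verified TCAM budget."""
    used = (ipv4_entries * SPACES_PER_IPV4_ENTRY +
            ipv6_entries * SPACES_PER_IPV6_ENTRY)
    return used <= TCAM_SPACES_AVAILABLE

# Example: 60 IPv4 + 30 IPv6 entries use 60 + 60 = 120 spaces -> fits.
print(acl_fits(60, 30))   # True
# Example: 100 IPv4 + 20 IPv6 entries use 100 + 40 = 140 spaces -> exceeds.
print(acl_fits(100, 20))  # False
```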
Scale Limits for NDFC SAN Controller
Description | Verified Limit
---|---
Zone sets | 1000
Zones | 16000
Profile | Deployment Type | Verified Limit
---|---|---
SAN Controller | 1-Node vND (app OVA) | 80 Switches, 20K Ports
SAN Controller | 3-Node vND (app OVA) | 80 Switches, 20K Ports
SAN Controller on Linux (RHEL) (Install Profile: Default) | 1-Node vND (app OVA) | 80 Switches, 20K Ports
SAN Controller on Linux (RHEL) (Install Profile: Default) | 3-Node vND (app OVA) | 80 Switches, 20K Ports
Profile | Deployment Type | Verified Limit
---|---|---
SAN Controller | 1-Node vND (data OVA) | 120K ITLs/ITNs
SAN Controller | 1-Node pND (SE) | 120K ITLs/ITNs
SAN Controller | 3-Node vND (data OVA) | 240K ITLs/ITNs
SAN Controller | 3-Node pND (SE) | 500K ITLs/ITNs
SAN Controller on Linux (RHEL) (Install Profile: Large) | 1-Node vND | 120K ITLs/ITNs
SAN Controller on Linux (RHEL) (Install Profile: Large) | 3-Node vND | 240K ITLs/ITNs
Note: ITLs - Initiator-Target-LUNs; ITNs - Initiator-Target-Namespace IDs.
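Given the SAN Insights tiers above, a planned ITL/ITN count maps to the smallest verified deployment that accommodates it. The following Python sketch copies the deployment names and limits from the table above; the helper itself is illustrative and not part of NDFC.

```python
# Sketch: pick the smallest verified SAN Insights deployment for a planned
# ITL/ITN count. Tiers are copied from the table above; the helper name
# and structure are illustrative only.

SAN_INSIGHTS_TIERS = [
    ("1-Node vND (data OVA) or 1-Node pND (SE)", 120_000),
    ("3-Node vND (data OVA)", 240_000),
    ("3-Node pND (SE)", 500_000),
]

def smallest_deployment(planned_itls: int):
    """Return the first deployment whose verified ITL/ITN limit covers the plan."""
    for deployment, limit in SAN_INSIGHTS_TIERS:
        if planned_itls <= limit:
            return deployment
    return None  # exceeds all verified limits

print(smallest_deployment(200_000))  # "3-Node vND (data OVA)"
print(smallest_deployment(600_000))  # None: above the 500K verified limit
```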