Prerequisites and Guidelines
Before you proceed with deploying the Nexus Dashboard cluster in VMware ESX, you must:
- Ensure that the ESX form factor supports your scale and services requirements.
  Scale and services support and co-hosting vary based on the cluster form factor and the specific services you plan to deploy. You can use the Nexus Dashboard Capacity Planning tool to verify that the virtual form factor satisfies your deployment requirements.
Note
Some services (such as Nexus Dashboard Fabric Controller) may require only a single ESX virtual node for one or more specific use cases. In that case, the capacity planning tool will indicate the requirement and you can simply skip the additional node deployment step in the following section.
However, note that if you have to deploy a mix of App and Data nodes, for example if you plan to deploy Nexus Dashboard Insights or co-host multiple services in the same cluster, you must ensure that the Data nodes are deployed first as the initial cluster's 3 master nodes. Then you can add the App nodes as the worker nodes, as described in the Cisco Nexus Dashboard User Guide.
- Review and complete the general prerequisites described in Deployment Overview and Requirements.
  Note that this document describes how to initially deploy the base Nexus Dashboard cluster. If you want to expand an existing cluster with additional nodes (such as worker or standby), see the "Infrastructure Management" chapter of the Cisco Nexus Dashboard User Guide instead, which is available from the Nexus Dashboard UI or online at Cisco Nexus Dashboard User Guide.
- Review and complete any additional prerequisites described in the Release Notes for the services you plan to deploy.
- When deploying in VMware ESX, you can deploy two types of nodes:
  - Data Node—node profile designed for data-intensive applications, such as Nexus Dashboard Insights
  - App Node—node profile designed for non-data-intensive applications, such as Nexus Dashboard Orchestrator
  Ensure you have enough system resources (a sizing-verification sketch is provided after this list):

  Table 1. Deployment Requirements

  Nexus Dashboard Version: Release 2.1.x

  Data Node Requirements:
  - VMware ESXi 6.5, 6.7, or 7.0
  - VMware vCenter 6.x, if deploying using vCenter
  - Each VM requires:
    - 32 vCPUs
    - 128GB of RAM
    - 3TB SSD storage for the data volume and an additional 50GB for the system volume
      All Data nodes must be deployed on SSD or faster storage.
  - We recommend that each Nexus Dashboard node is deployed in a different ESXi server.

  App Node Requirements:
  - VMware ESXi 6.5, 6.7, or 7.0
  - VMware vCenter 6.x, if deploying using vCenter
  - Each VM requires:
    - 16 vCPUs
    - 64GB of RAM
    - 500GB HDD or SSD storage for the data volume and an additional 50GB for the system volume
      Some services require App nodes to be deployed on faster SSD storage while other services support HDD. Check the Nexus Dashboard Capacity Planning tool to ensure that you use the correct type of storage.
  - We recommend that each Nexus Dashboard node is deployed in a different ESXi server.
- After each node's VM is deployed, ensure that the VMware Tools periodic time synchronization is disabled as described in the deployment procedure in the next section (a scripted alternative is sketched after this list).
- VMware vMotion is not supported for Nexus Dashboard cluster nodes.
- VMware Distributed Resource Scheduler (DRS) is not supported for Nexus Dashboard cluster nodes.
- You can choose to deploy the nodes directly in ESXi or using vCenter.
  If you want to deploy using vCenter, follow the steps described in Deploying Cisco Nexus Dashboard Using VMware vCenter.
  If you want to deploy directly in ESXi, follow the steps described in Deploying Cisco Nexus Dashboard Directly in VMware ESXi.
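The following is a minimal sketch, using pyVmomi, of one way to verify that a deployed node VM matches the profile sizing from Table 1. The vCenter hostname, credentials, and VM name are placeholders for this example, not values from this guide; adapt them to your environment.

```python
# Hedged sketch: check a deployed Nexus Dashboard node VM against the Table 1
# profile sizing using pyVmomi. Hostnames, credentials, and VM names below are
# placeholders and must be adjusted for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

PROFILES = {
    # Values taken from Table 1 (Release 2.1.x requirements).
    "data": {"cpu": 32, "mem_gb": 128, "data_disk_gb": 3000},
    "app":  {"cpu": 16, "mem_gb": 64,  "data_disk_gb": 500},
}

def find_vm(content, name):
    """Return the first VM whose name matches, or None."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next((vm for vm in view.view if vm.name == name), None)
    finally:
        view.Destroy()

def check_node(vm, profile):
    """Print the VM's sizing and whether it meets the selected profile."""
    hw = vm.config.hardware
    disks_gb = [d.capacityInKB / (1024 * 1024)
                for d in hw.device if isinstance(d, vim.vm.device.VirtualDisk)]
    ok = (hw.numCPU >= profile["cpu"]
          and hw.memoryMB >= profile["mem_gb"] * 1024
          and any(gb >= profile["data_disk_gb"] for gb in disks_gb))
    print(f"{vm.name}: vCPU={hw.numCPU}, RAM={hw.memoryMB // 1024}GB, "
          f"disks(GB)={[round(g) for g in disks_gb]} -> {'OK' if ok else 'UNDERSIZED'}")

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()            # lab only; validate certs in production
    si = SmartConnect(host="vcenter.example.com",     # placeholder vCenter/ESXi host
                      user="administrator@vsphere.local", pwd="password",
                      sslContext=ctx)
    try:
        vm = find_vm(si.RetrieveContent(), "nd-node1")  # placeholder VM name
        if vm:
            check_node(vm, PROFILES["data"])            # or PROFILES["app"]
    finally:
        Disconnect(si)
```

Note that this only checks the allocated virtual hardware; it does not verify the SSD versus HDD storage class, which still needs to be confirmed against the datastore backing.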
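The next sketch, also using pyVmomi, shows one way to disable VMware Tools periodic time synchronization on the node VMs instead of doing it interactively; the deployment procedure in the next section remains the authoritative method. The connection details and the "nd-" naming convention are assumptions made for the example.

```python
# Hedged sketch: disable VMware Tools periodic time synchronization on node VMs
# with pyVmomi. Connection details and the "nd-" VM name prefix are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def disable_time_sync(vm):
    """Reconfigure the VM so VMware Tools no longer periodically syncs guest time with the host."""
    spec = vim.vm.ConfigSpec()
    spec.tools = vim.vm.ToolsConfigInfo(syncTimeWithHost=False)
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
    print(f"{vm.name}: periodic time synchronization disabled")

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()           # lab only
    si = SmartConnect(host="vcenter.example.com",    # placeholder
                      user="administrator@vsphere.local", pwd="password",
                      sslContext=ctx)
    try:
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.name.startswith("nd-"):            # placeholder node naming convention
                disable_time_sync(vm)
        view.Destroy()
    finally:
        Disconnect(si)
```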
ESX Host Network Connectivity
If you plan to install Nexus Dashboard Insights or Fabric Controller service and use the Persistent IPs feature, you must ensure that the ESX host where the cluster nodes are deployed has a single logical uplink. In other words, it is connected via a single link, PC, or vPC and not a dual Active/Active (A/A) or Active/Standby (A/S) link without PC/vPC. A sketch for inspecting a host's vSwitch uplink configuration is provided at the end of this section.
The following summarizes the supported and unsupported network connectivity configurations for the ESX host where the nodes are deployed:
- In case the ESX host is connected directly, the following configurations are supported:
  - A/A uplinks of Port-Group or virtual switch with PC or vPC
  - Single uplink of Port-Group or virtual switch
  - Port-Channel used for the uplink of Port-Group or virtual switch
  A/A or A/S uplinks of Port-Group or virtual switch without PC or vPC are not supported.
- In case the ESX host is connected via a UCS Fabric Interconnect (or equivalent), the following configurations are supported:
  - A/S uplinks of Port-Group or virtual switch at the UCS Fabric Interconnect level without PC or vPC. In this case, the Active/Standby links are based on the server technology, such as Fabric Failover for Cisco UCS, and not at the ESXi hypervisor level.
  - Single uplink of Port-Group or virtual switch
  A/A or A/S uplinks of Port-Group or virtual switch at the hypervisor level without PC or vPC are not supported.
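As a starting point for checking the uplink layout described above, the following is a minimal pyVmomi sketch that reports the NIC teaming configuration of each standard vSwitch on the ESXi hosts it can see. It cannot determine whether a PC/vPC is configured on the physical switch side and does not cover distributed switches, so it only flags candidates for manual review; the host name and credentials are placeholders.

```python
# Hedged sketch: list active/standby uplinks per standard vSwitch to help spot
# multiple active uplinks that are not backed by an upstream PC/vPC. The
# upstream port-channel configuration itself must still be verified manually.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def report_uplinks(host):
    """Print the active/standby uplink NICs for each standard vSwitch on the host."""
    for vsw in host.config.network.vswitch:
        teaming = vsw.spec.policy.nicTeaming
        order = teaming.nicOrder if teaming else None
        active = list(order.activeNic or []) if order else []
        standby = list(order.standbyNic or []) if order else []
        print(f"{host.name} / {vsw.name}: active={active} standby={standby}")
        if len(active) > 1:
            print("  -> multiple active uplinks: supported only if they form a PC/vPC upstream")
        elif standby:
            print("  -> active/standby uplinks: supported only behind a UCS Fabric Interconnect "
                  "(or equivalent) failover mechanism, not at the hypervisor level")

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()              # lab only
    si = SmartConnect(host="esxi-host.example.com",     # placeholder ESXi or vCenter host
                      user="root", pwd="password", sslContext=ctx)
    try:
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            report_uplinks(host)
        view.Destroy()
    finally:
        Disconnect(si)
```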