Configuring High Availability Clusters
Cisco Nexus Dashboard Data Broker supports high availability clustering in active/active mode with up to five controllers. To use high availability clustering with Cisco Nexus Dashboard Data Broker, you must edit the config.ini file for each instance of Cisco Nexus Dashboard Data Broker.
NDB supports 2-node, 3-node, and 5-node cluster configurations.
In a split-brain scenario, 2-node and 3-node (or 5-node) clusters are handled as follows:
- 2-node cluster: Cluster health is indicated as Red. To prevent both NDDB controllers in the cluster from acting on the devices after a split brain, both NDDB nodes disconnect all devices. You cannot make configuration changes using the NDDB GUI; however, the state of the NDDB switches is not affected. To continue using the NDDB GUI, click Yes when you are prompted (pop-up) for an override operation. Click Yes for the override operation on only one of the cluster nodes; clicking Yes on both controllers leaves all switches connected to the controllers in an inconsistent state.
- 3-node (and 5-node) cluster: Cluster health is indicated as Yellow. At least fifty percent of the configured cluster nodes must be reachable for the cluster to remain operational. If not, the cluster nodes move to a non-operational state and the cluster health indicator is displayed as Red. There is no override option for clusters of three or more nodes. Fix the VM and/or network link, as required.
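The fifty-percent rule above also explains why only the 2-node cluster needs a manual override: in an even split, both halves of a 2-node cluster satisfy the rule. A minimal sketch of this majority check (illustrative only, not NDDB source code):

```python
def is_operational(reachable: int, configured: int) -> bool:
    """A partition stays operational only if at least fifty percent of
    the configured cluster nodes are reachable (per the rule above)."""
    return 2 * reachable >= configured

# 3-node cluster split 2/1: only the 2-node side stays operational.
# 2-node cluster split 1/1: both sides pass the 50% test, which is why
# the GUI prompts for a manual override on exactly one node.
```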
Note: IPv6 is supported in centralized Nexus Dashboard Data Broker mode only; it is not supported in Embedded mode.
Cluster Indicator | Cluster Status | Recommendation
---|---|---
Green | Operational |
Yellow | Some of the cluster nodes are not available. | Do not make any changes or additions to the existing Nexus Dashboard Data Broker configuration.
Red | The node is isolated from the cluster. | Do not make any changes or additions to the existing Nexus Dashboard Data Broker configuration. Note: For a two-node cluster, perform the override on only one of the cluster nodes to ensure regular operation.
Before you begin
- All IP addresses must be reachable and capable of communicating with each other.
- All switches in the cluster must connect to all of the controllers.
- All controllers must have the same HA clustering configuration information in the config.ini files.
- All controllers must have the same information in the ndb/configuration/startup directory.
- If using cluster passwords, all controllers must have the same password configured in the ndbjgroups.xml file.
Procedure
Step 1: Open a command window on one of the instances in the cluster.
Step 2: Navigate to the ndb/configuration directory that was created when you installed the software.
Step 3: Use any text editor to open the config.ini file.
Step 4: Locate the following text:
Step 5: Example: IPv4 example.
Example: IPv6 example.
Step 6: Save the file and exit the editor.
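The clustering entry and the IPv4/IPv6 examples are not reproduced above. As a rough sketch of what the edited section of config.ini typically looks like (the supernodes parameter name and comma-separated address format are assumptions based on common NDB deployments; verify against the documentation for your release):

```ini
# Sketch of an HA clustering entry in config.ini (illustrative only).
# List every controller in the cluster, comma-separated, no spaces.
# IPv4 example:
supernodes = <ip1>,<ip2>,<ip3>
# IPv6 example (same parameter, IPv6 addresses instead):
# supernodes = <ip1>,<ip2>,<ip3>
```

Use the same entry, with the same addresses in the same order, on every controller in the cluster, as required by the prerequisites above.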
What to do next
(Optional) Use this procedure to configure the failure-detection timeout for a node and the number of retries.
- Open a command window on one of the instances in the cluster.
- Navigate to the ndb/configuration directory.
- Use any text editor to open the ndbjgroups.xml file.
- Locate the following text:
  <FD timeout="3000" max_tries="3"/>
- Modify the timeout value (in milliseconds) and the max_tries value, as required.
- Save the file and exit the editor.
- Repeat the above steps for all instances in the cluster.
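For reference, a modified failure-detection entry might look like the following. The FD element and its timeout and max_tries attributes are taken from the line shown above; the values here are illustrative:

```xml
<!-- Failure detection (illustrative values): wait 5000 ms between
     heartbeat checks and suspect a node after 5 missed responses. -->
<FD timeout="5000" max_tries="5"/>
```

Larger values make the cluster more tolerant of transient network delay, at the cost of detecting a failed node more slowly.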
Password Protecting High Availability Clusters
Procedure
Step 1: Open a command window on one of the instances in the cluster.
Step 2: Navigate to the ndb/configuration directory.
Step 3: Use any text editor to open the ndbjgroups.xml file.
Step 4: Locate the following text:
Step 5: Remove the comments from the AUTH line. Example:
Step 6: (Optional) Change the password in the auth_value attribute.
Step 7: Save the file and exit the editor.
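The AUTH line itself is not reproduced above. As a hedged illustration only, an uncommented JGroups AUTH entry generally takes a form like the following (the auth_class and token_hash values are assumptions; use the exact line already shipped in your ndbjgroups.xml and change only auth_value):

```xml
<!-- Illustrative only: uncomment the AUTH line in ndbjgroups.xml and
     set auth_value to your cluster password. -->
<AUTH auth_class="org.jgroups.auth.MD5Token"
      auth_value="YourClusterPassword"
      token_hash="MD5"/>
```

Per the prerequisites above, configure the same password on every controller in the cluster.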