Create a Worker Node
This section describes how to create a self-managed worker node in the EKS cluster that satisfies all of XRd's requirements on the host operating system.
Before creating a worker node, ensure that the EKS cluster is in the ACTIVE state, and that the authentication and networking configuration has been applied as described in the EKS Cluster Configuration section.
This example is for the XRd vRouter and uses an m5.24xlarge instance with three interfaces:

- One interface reserved for cluster communication.
- Two XRd data interfaces.
Prerequisites
- Find the number of cores on the instance type.

  To find the number of cores, run the following command:

  aws ec2 describe-instance-types \
      --instance-types m5.24xlarge \
      --query "InstanceTypes[0].VCpuInfo.DefaultCores" \
      --output text

  This value must be substituted for <cpu-cores> in the EC2 run-instances command.
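As a sanity check, the value reported by the command above can also be derived by hand. This is a minimal sketch assuming the published m5.24xlarge specifications (96 default vCPUs with 2 threads per physical core); the variable names are illustrative, not AWS CLI output fields:

```shell
# Illustration only: DefaultCores = DefaultVCpus / DefaultThreadsPerCore.
# m5.24xlarge exposes 96 vCPUs with 2 threads per physical core by default.
DEFAULT_VCPUS=96
THREADS_PER_CORE=2
CPU_CORES=$((DEFAULT_VCPUS / THREADS_PER_CORE))
echo "${CPU_CORES}"   # value to substitute for <cpu-cores>
```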
- Create a user data file.

  Create the user data file by copying the following contents into a file named worker-user-data.bash:

  #!/bin/bash
  /etc/eks/bootstrap.sh xrd-cluster

  For XRd Control Plane, add the following two sysctl settings to the user data file:

  echo "fs.inotify.max_user_instances=64000" >> /etc/sysctl.conf
  echo "fs.inotify.max_user_watches=64000" >> /etc/sysctl.conf
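The steps above can be combined into a single command that writes the file with a heredoc. This sketch shows the XRd Control Plane variant; for the vRouter, drop the two sysctl lines:

```shell
# Write the worker user data file in one step (XRd Control Plane variant).
# The two sysctl lines are only needed for XRd Control Plane.
cat > worker-user-data.bash <<'EOF'
#!/bin/bash
/etc/eks/bootstrap.sh xrd-cluster
echo "fs.inotify.max_user_instances=64000" >> /etc/sysctl.conf
echo "fs.inotify.max_user_watches=64000" >> /etc/sysctl.conf
EOF
```

The quoted heredoc delimiter (`'EOF'`) prevents the shell from expanding anything inside the file contents.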
Bring Up the Worker Node
Bring up the worker node by running the following command:
aws ec2 run-instances \
--image-id <xrd-ami-id> \
--count 1 \
--instance-type m5.24xlarge \
--key-name <key-pair-name> \
--block-device-mappings "DeviceName=/dev/xvda,Ebs={VolumeSize=56}" \
--iam-instance-profile "Arn=<node-profile-arn>" \
--network-interfaces "DeleteOnTermination=true,DeviceIndex=0,Groups=<sg-id>,SubnetId=<private-subnet-1>,PrivateIpAddress=10.0.0.10" \
--cpu-options CoreCount=<cpu-cores>,ThreadsPerCore=1 \
--tag-specifications "ResourceType=instance,Tags=[{Key=kubernetes.io/cluster/xrd-cluster,Value=owned}]" \
--user-data file://worker-user-data.bash
Make a note of the instance ID, <worker-instance-id>.
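Rather than copying the instance ID by hand, you can capture it in a shell variable; the AWS CLI supports this directly by appending `--query 'Instances[0].InstanceId' --output text` to the run-instances command. The sketch below instead extracts the ID from a saved JSON response with sed; the response and instance ID shown are illustrative samples, not real output:

```shell
# Sketch: pull the instance ID out of a run-instances JSON response.
# sample_response is illustrative data; in practice use the CLI's --query
# option on the run-instances command itself.
sample_response='{"Instances":[{"InstanceId":"i-0123456789abcdef0"}]}'
WORKER_INSTANCE_ID=$(printf '%s' "$sample_response" \
    | sed -n 's/.*"InstanceId":"\([^"]*\)".*/\1/p')
echo "$WORKER_INSTANCE_ID"
```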
This command brings up an EC2 instance with the following settings:
- A 56-GB primary partition, which is required to store any process cores that XRd generates.
- A single interface in the first private subnet with permissions to communicate with the EKS control plane. This interface is used for cluster control plane communications. The assigned IP address is 10.0.0.10.
- One thread per core (SMT, or Hyper-Threading, turned off). This prevents the "noisy neighbor" effect, where processes scheduled on a different logical core of the same physical core hamper the performance of the high-performance packet processing threads.
- A tag that is required by EKS to indicate that the node should be allowed to join the cluster.
- A user data file that runs the EKS bootstrap script with the cluster name.
For XRd Control Plane, the requirements differ as follows:

- You can use a smaller (and cheaper) instance type, for example, m5.2xlarge.
- The --cpu-options option is not required.
Turn off the source/destination check for the instance by running the following command:
aws ec2 modify-instance-attribute \
--instance-id <worker-instance-id> \
--no-source-dest-check
When the worker node is up, verify that it has joined the cluster:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-0-10.ec2.internal Ready <none> 1m v1.22.17-eks-48e63af
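The node can take a minute or two to register and become Ready, so the check above may need to be repeated. A minimal polling sketch is shown below; the helper function name, attempt count, and interval are illustrative, and kubectl access to the cluster is assumed:

```shell
# Sketch: retry a command until it succeeds, or give up.
# $1 = max attempts, $2 = seconds between attempts, rest = command to run.
wait_for_ready() {
    local attempts=$1 interval=$2
    shift 2
    local i
    for i in $(seq 1 "$attempts"); do
        "$@" && return 0
        [ "$i" -lt "$attempts" ] && sleep "$interval"
    done
    return 1
}

# Example usage (illustrative timings: up to 5 minutes in 10-second steps):
# wait_for_ready 30 10 sh -c "kubectl get nodes | grep -q ' Ready '"
```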
Note: If you do not see the worker node, check the EKS configuration steps.