Using managed EKS clusters¶
Initial Setup¶
Install the EKS plugin¶
To use Amazon Elastic Kubernetes Service (EKS), begin by installing the “EKS clusters” plugin from the Plugins store in Dataiku DSS. For more details, see the instructions for installing plugins.
Prepare your local commands¶
Follow the AWS documentation to ensure the following on your local machine (where DSS is installed):
- The aws command has credentials that give it write access to Amazon Elastic Container Registry (ECR) and full control over EKS.
- The aws-iam-authenticator command is installed. See documentation.
- The kubectl command is installed. See documentation.
- The docker command is installed and can build images. See documentation.
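You can quickly check that these prerequisites are met on the DSS machine; for example, the following standard commands verify that each tool is installed and that the aws command resolves valid credentials:
aws sts get-caller-identity
aws-iam-authenticator version
kubectl version --client
docker version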
Note
- Cluster management has been tested with the following versions of Kubernetes: 1.23, 1.24, 1.25, 1.26, 1.27, 1.28, 1.29, 1.30 and 1.31.
- There is no known issue with other Kubernetes versions.
Create base images¶
Build the base image by following these instructions.
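For reference, the default base image for containerized execution is typically built from the DSS data directory with a command of this form:
./bin/dssadmin build-base-image --type container-exec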
Create a new containerized execution configuration¶
Go to Administration > Settings > Containerized execution, and add a new execution configuration of type “Kubernetes”.
- The image registry URL is the one given by aws ecr describe-repositories, without the image name. It typically looks like XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/PREFIX, where XXXXXXXXXXXX is your AWS account ID, us-east-1 is the AWS region for the repository, and PREFIX is an optional prefix to organize your repositories (see the example below).
- Set “Image pre-push hook” to “Enable push to ECR”.
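For example, you can list the URIs of your existing ECR repositories (the registry URL is the part before the image name) with a command along these lines:
aws ecr describe-repositories --query "repositories[].repositoryUri" --output table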
Cluster configuration¶
Connection¶
The connection is where you define how to connect to AWS. Instead of providing a value here, we recommend that you leave it empty and use the AWS credentials found by the aws command in ~/.aws/credentials.
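If the aws command is not yet configured on the DSS machine, credentials can typically be set up with the standard AWS CLI commands below, which store them under ~/.aws:
aws configure
aws configure list
The first command prompts for the access key, secret key and default region; the second shows which credentials the aws command currently resolves.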
The connection can be defined either inline in each cluster (not recommended), or as a preset in the plugin’s settings (recommended).
Network settings¶
EKS requires two subnets in the same virtual private cloud (VPC). Your AWS administrator needs to provide you with two subnet identifiers. We strongly recommend that these subnets reside in the same VPC as the DSS host. Otherwise, you have to manually set up some peering and routing between VPCs.
Additionally, you must indicate security group IDs. These security groups will be associated with the EKS cluster nodes. The networking requirement is that the DSS machine has full inbound connectivity from the EKS cluster nodes. We recommend that you use the default security group.
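If you need to look up the subnet and security group identifiers yourself, commands along these lines can help (the VPC ID below is a placeholder):
aws ec2 describe-subnets --filters Name=vpc-id,Values=vpc-0123456789abcdef0 --query "Subnets[].SubnetId"
aws ec2 describe-security-groups --filters Name=vpc-id,Values=vpc-0123456789abcdef0 --query "SecurityGroups[].[GroupId,GroupName]" --output table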
Network settings can be defined either inline in each cluster (not recommended), or as a preset in the plugin’s settings (recommended).
Cluster nodes¶
This setting allows you to define the number and type of nodes in the cluster.
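Once the cluster is created and attached, and assuming your kubectl context points at it, you can verify that the expected nodes have joined with:
kubectl get nodes -o wide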
Using GPUs¶
AWS provides GPU-enabled instances with NVidia GPUs. Using GPUs for containerized execution requires the following steps.
Building an image with CUDA support¶
The base image that is built by default does not have CUDA support and cannot use NVidia GPUs.
You need to build a CUDA-enabled base image. To enable CUDA, add the --with-cuda option to the command line:
./bin/dssadmin build-base-image --type container-exec --with-cuda
We recommend that you give this image a specific tag using the --tag option and keep the default base image “pristine”. We also recommend that you include the DSS version number in the image tag.
./bin/dssadmin build-base-image --type container-exec --with-cuda --tag dataiku-container-exec-base-cuda:X.Y.Z
where X.Y.Z is your DSS version number
Note
This image contains CUDA 10.0 and cuDNN 7.6. You can use --cuda-version X.Y to specify another DSS-provided version (9.0, 10.0, 10.1, 10.2, 11.0 and 11.2 are available). If you require other CUDA versions, you have to create a custom image.
Remember that, depending on the CUDA version with which you build the base image (10.0 by default), you will need to use the corresponding TensorFlow version.
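For instance, a build targeting CUDA 11.2 could look like the following (the image tag is only illustrative):
./bin/dssadmin build-base-image --type container-exec --with-cuda --cuda-version 11.2 --tag dataiku-container-exec-base-cuda112:X.Y.Z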
Warning
After each upgrade of DSS, you must rebuild all base images and update code envs.
Thereafter, create a new container configuration dedicated to running GPU workloads. If you specified a tag for the base image, enter it in the “Base image tag” field.
Enable GPU support on the cluster¶
When you create your cluster using the EKS plugin, be sure to select an instance type with a GPU. See the EC2 documentation for a full list. You also need to enable the “With GPU” option in the node pool settings.
At cluster creation, the plugin will run the NVidia driver “DaemonSet” installation procedure, which needs several minutes to complete.
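Once the DaemonSet has finished deploying, you can typically confirm that the GPUs are advertised to Kubernetes with a command such as:
kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"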
Add a custom reservation¶
For your containerized execution task to run on nodes with GPUs, and for EKS to configure the CUDA driver on your containers, the corresponding pods must be created with a custom limit (in Kubernetes parlance). This indicates that you need a specific type of resource (standard resource types are CPU and memory).
You must configure this limit in the containerized execution configuration. To do this:
- In the “Custom limits” section, add a new entry with key nvidia.com/gpu and value 1 (to request 1 GPU).
- Add the new entry and save your settings.
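To check that a containerized execution pod actually received the GPU reservation, you can inspect its resource limits; for example (the pod name and namespace are placeholders):
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[0].resources.limits}'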
Deploy¶
You can now deploy your GPU-based recipes and models.