Using unmanaged EKS clusters¶
Setup¶
Create your EKS cluster¶
To create your Amazon Elastic Kubernetes Service (EKS) cluster, follow the AWS user guide. We recommend that you allocate at least 15 GB of memory for each cluster node. More memory may be required if you plan on running very large in-memory recipes.
You’ll be able to configure the memory allocation per container and per namespace by using multiple containerized execution configurations.
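For illustration, assuming you use eksctl (one of the approaches covered in the AWS guide), a cluster satisfying this recommendation could be created as follows; the cluster name and region are placeholders, and m5.xlarge nodes provide 16 GB of memory each:
eksctl create cluster --name my-dss-cluster --region us-east-1 --node-type m5.xlarge --nodes 3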
Prepare your local aws, docker, and kubectl commands¶
Follow the AWS documentation to ensure the following on your local machine (where Dataiku DSS is installed):
The aws ecr command can list and create docker image repositories and authenticate docker for image push.
The kubectl command can interact with the cluster.
The docker command can successfully push images to the ECR repository.
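As a quick check, the following commands exercise each of these prerequisites; the region, account ID, repository name, and cluster name are placeholders:
# List existing repositories and create one for DSS images
aws ecr describe-repositories --region us-east-1
aws ecr create-repository --repository-name dss-base-images --region us-east-1
# Authenticate docker against the registry (AWS CLI v2 syntax)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com
# Point kubectl at the cluster and check connectivity
aws eks update-kubeconfig --name my-dss-cluster --region us-east-1
kubectl get nodes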
Note
Cluster management has been tested with the following versions of Kubernetes: 1.23, 1.24, 1.25, 1.26, 1.27, 1.28, 1.29, 1.30, and 1.31. There is no known issue with other Kubernetes versions.
Create base images¶
Build the base image by following these instructions.
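For reference, the basic build command, run from the DSS data directory, looks like this:
./bin/dssadmin build-base-image --type container-exec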
Create a new execution configuration¶
Go to Administration > Settings > Containerized execution, and add a new execution configuration of type “Kubernetes”.
The image registry URL is the one given by aws ecr describe-repositories, without the image name. It typically looks like XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/PREFIX, where XXXXXXXXXXXX is your AWS account ID, us-east-1 is the AWS region for the repository, and PREFIX is an optional prefix to triage your repositories.
Set “Image pre-push hook” to Enable push to ECR.
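For example, you can list your repository URIs (the registry URL is the part before the image name) with:
aws ecr describe-repositories --query "repositories[].repositoryUri" --output text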
You’re now ready to run recipes and models on EKS.
Using GPUs¶
AWS provides GPU-enabled instances with NVidia GPUs. Using GPUs for containerized execution requires the following steps.
Building an image with CUDA support¶
The base image that is built by default does not have CUDA support and cannot use NVidia GPUs.
You need to build a CUDA-enabled base image. To enable CUDA, add the --with-cuda option to the command line:
./bin/dssadmin build-base-image --type container-exec --with-cuda
We recommend that you give this image a specific tag using the --tag option and keep the default base image “pristine”. We also recommend that you add the DSS version number in the image tag.
./bin/dssadmin build-base-image --type container-exec --with-cuda --tag dataiku-container-exec-base-cuda:X.Y.Z
where X.Y.Z is your DSS version number
Note
This image contains CUDA 10.0 and CuDNN 7.6. You can use --cuda-version X.Y to specify another DSS-provided version (9.0, 10.0, 10.1, 10.2, 11.0 and 11.2 are available). If you require other CUDA versions, you would have to create a custom image. Remember that, depending on the CUDA version with which you build the base image (10.0 by default), you will need to use the corresponding tensorflow version.
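For example, to build a CUDA 11.0 base image with a version-specific tag (the tag name below is illustrative):
./bin/dssadmin build-base-image --type container-exec --with-cuda --cuda-version 11.0 --tag dataiku-container-exec-base-cuda110:X.Y.Z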
Warning
After each upgrade of DSS, you must rebuild all base images and update code envs.
Then create a new containerized execution configuration dedicated to running GPU workloads. If you specified a tag for the base image, enter it in the “Base image tag” field.
Enable GPU support on the cluster¶
To execute containers that leverage GPUs, your worker nodes and the control plane must also support GPUs. The following steps describe a simplified way to enable a worker node to leverage its GPUs:
Install the NVidia Driver that goes with the model of GPU on the instance.
Install the CUDA driver. We recommend using the runfile installation method. Note that you do not have to install the CUDA toolkit, as the driver alone is sufficient.
Install the NVidia docker runtime and set this runtime as the default docker runtime.
Note
These steps can vary, depending on the underlying hardware and software version requirements for your projects.
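For illustration only, on a Linux node where docker and the nvidia-container-runtime package are already installed, the last two steps might look like the following sketch (the runfile name is a placeholder):
# Install the CUDA driver only (no toolkit), using the runfile method
sudo sh cuda_XX.Y_linux.run --silent --driver
# Make the NVidia runtime the default docker runtime
sudo tee /etc/docker/daemon.json <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
sudo systemctl restart docker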
Finally, enable the cluster GPU support with the NVidia device plugin. Be careful to select the version that matches your Kubernetes version (v1.10 as of July 2018).
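For example (the plugin version below is a placeholder; pick the release matching your Kubernetes version from the NVidia device plugin repository):
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/vX.Y.Z/nvidia-device-plugin.yml
# Verify that nodes now advertise GPU resources
kubectl describe nodes | grep nvidia.com/gpu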
Add a custom reservation¶
For your container execution to be located on nodes with GPU accelerators, and for EKS to configure the CUDA driver on your containers, the corresponding EKS pods must be created with a custom “limit” (in Kubernetes parlance). This indicates that you need a specific type of resource (standard resource types are CPU and memory).
You must configure this limit in the containerized execution configuration. To do this:
In the “Custom limits” section, add a new entry with key nvidia.com/gpu and value 1 (to request 1 GPU).
Add the new entry and save the settings.
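Independently of DSS, you can check that a pod requesting this limit is scheduled on a GPU node and can see the GPU; a minimal smoke test, assuming a CUDA image such as nvidia/cuda:10.0-base is pullable by the cluster:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:10.0-base
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
# Once the pod completes, its logs should show the nvidia-smi output
kubectl logs gpu-smoke-test
kubectl delete pod gpu-smoke-test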
Deploy¶
You can now deploy your recipes and models that require GPUs.