Using unmanaged EKS clusters
To create your Amazon Elastic Kubernetes Service (EKS) cluster, follow the AWS user guide. We recommend that you allocate at least 15 GB of memory for each cluster node. More memory may be required if you plan on running very large in-memory recipes.
You’ll be able to configure the memory allocation for each container and per-namespace using multiple containerized execution configurations.
Follow the AWS documentation to ensure the following on your local machine (where Dataiku DSS is installed):
- The `aws ecr` command can list and create Docker image repositories and authenticate `docker` for image push.
- The `kubectl` command can interact with the cluster.
- The `docker` command can successfully push images to the ECR repository.
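The checks above can be sketched as follows. The repository name `dss-images`, the account ID `123456789012`, and the region `us-east-1` are placeholders, not values from this guide; each check is skipped if the corresponding CLI is not installed.

```shell
# Placeholder values -- substitute your own.
REGION=us-east-1
REPO=dss-images

if command -v aws >/dev/null 2>&1; then
  # aws ecr can list and create repositories
  aws ecr describe-repositories --region "$REGION"
  aws ecr create-repository --repository-name "$REPO" --region "$REGION"
  # aws ecr can authenticate docker for image push
  aws ecr get-login-password --region "$REGION" |
    docker login --username AWS --password-stdin \
      123456789012.dkr.ecr."$REGION".amazonaws.com
fi

if command -v kubectl >/dev/null 2>&1; then
  # kubectl can interact with the cluster
  kubectl get nodes
fi
```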
Go to Administration > Settings > Containerized execution, and add a new execution configuration of type “Kubernetes”.
- The image registry URL is the one given by `aws ecr describe-repositories`, without the image name. It typically looks like `XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/PREFIX`, where `XXXXXXXXXXXX` is your AWS account ID, `us-east-1` is the AWS region for the repository, and `PREFIX` is an optional prefix to triage your repositories.
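ECR registry URLs follow a fixed pattern, so the value to enter can be assembled from the account ID, region, and optional prefix. The values below are hypothetical:

```shell
# Hypothetical values; substitute your own account ID, region, and prefix.
ACCOUNT_ID=123456789012
REGION=us-east-1
PREFIX=dataiku

# The value for the "Image registry URL" field: registry host plus the
# optional prefix, without any image name.
REGISTRY_URL="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${PREFIX}"
echo "$REGISTRY_URL"
```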
- Set “Image pre-push hook” to Enable push to ECR.
You’re now ready to run recipes and models on EKS.
AWS provides GPU-enabled instances with NVIDIA GPUs. Using GPUs for containerized execution requires the following steps.
Then, create a new containerized execution configuration dedicated to running GPU workloads. If you specified a tag for the base image, report it in the “Base image tag” field.
To execute containers that leverage GPUs, your worker nodes and the control plane must also support GPUs. The following steps describe a simplified way to enable a worker node to leverage its GPUs:
- Install the NVIDIA Driver that goes with the model of GPU on the instance.
- Install the CUDA driver. We recommend using the runfile installation method. Note that you do not have to install the CUDA toolkit, as the driver alone is sufficient.
- Install the NVIDIA docker runtime and set this runtime as the default docker runtime.
These steps can vary, depending on the underlying hardware and software version requirements for your projects.
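As an illustration of the last step above, the NVIDIA Docker runtime is typically set as the default runtime in `/etc/docker/daemon.json` (this fragment assumes the `nvidia-container-runtime` binary installed by the nvidia-docker packages; restart the Docker daemon after editing):

```json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```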
Finally, enable the cluster GPU support with the NVIDIA device plugin. Be careful to select the version that matches your Kubernetes version (v1.10 as of July 2018).
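Deploying the device plugin is typically a single `kubectl` command. The manifest URL below points at the v1.10 release and is illustrative; check the plugin's repository for the tag matching your cluster version:

```shell
# Illustrative: the tag (v1.10 here) must match your Kubernetes version.
PLUGIN_MANIFEST="https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.10/nvidia-device-plugin.yml"

if command -v kubectl >/dev/null 2>&1; then
  kubectl apply -f "$PLUGIN_MANIFEST"
fi
```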
For your container execution to be located on nodes with GPU accelerators, and for EKS to configure the CUDA driver on your containers, the corresponding EKS pods must be created with a custom “limit” (in Kubernetes parlance). This indicates that you need a specific type of resource (standard resource types are CPU and memory).
You must configure this limit in the containerized execution configuration. To do this:
- In the “Custom limits” section, add a new entry with key `nvidia.com/gpu` and value `1` (to request 1 GPU).
- Add the new entry and save the settings.
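The effect of this setting is a pod spec along these lines (the pod name and image are hypothetical); the custom limit is what steers the pod onto a GPU-equipped node and lets the device plugin expose the GPU to the container:

```yaml
# Illustrative pod spec; metadata and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: dss-gpu-recipe
spec:
  containers:
    - name: main
      image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/dss-images:latest
      resources:
        limits:
          nvidia.com/gpu: 1   # the custom limit: request one GPU
```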