Using unmanaged GKE clusters
To create a Google Kubernetes Engine (GKE) cluster, follow the Google Cloud Platform (GCP) documentation on creating a GKE cluster. We recommend that you allocate at least 16 GB of memory for each cluster node. More memory may be required if you plan to run very large in-memory recipes.
You’ll be able to configure the memory allocation for each container and per namespace in Dataiku DSS using multiple containerized execution configurations.
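As a sketch, a cluster meeting the memory recommendation could be created as follows. The cluster name, zone, and node count are placeholders; `e2-standard-4` is one machine type that provides 16 GB of memory per node.

```shell
# Hypothetical example: create a 3-node GKE cluster whose nodes each
# have 16 GB of memory (the recommended minimum for DSS workloads).
gcloud container clusters create my-gke-cluster \
    --zone europe-west1-b \
    --machine-type e2-standard-4 \
    --num-nodes 3
```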
Follow the GCP documentation to ensure the following on your local machine (where DSS is installed):

- The `gcloud` command has the appropriate permissions and scopes to push images to the Google Container Registry (GCR) service.
- The `kubectl` command is installed and can interact with the cluster. This can be achieved by running the `gcloud container clusters get-credentials your-gke-cluster-name` command.
- The `docker` command is installed, can build images, and can push them to GCR. The latter can be enabled by running the `gcloud auth configure-docker` command.
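The checks above can be sketched as a short verification sequence (the cluster name is a placeholder):

```shell
# Verify the local prerequisites for containerized execution.
gcloud auth list                                          # active credentials and account
gcloud container clusters get-credentials my-gke-cluster  # write a kubeconfig entry for the cluster
kubectl get nodes                                         # confirm kubectl can reach the cluster
gcloud auth configure-docker                              # register gcloud as a docker credential helper
docker info                                               # confirm the docker daemon is reachable
```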
Go to Administration > Settings > Containerized execution, and add a new execution configuration of type “Kubernetes”.
- In GCP, there is only a single shared image repository URL, `gcr.io`. Access control is based on image names. Therefore the repository URL to use is `gcr.io/your-gcp-project-name`, where `your-gcp-project-name` is the name of your GCP project.
- Finish by clicking Push base images.
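Pushing the base images follows the usual GCR pattern. A minimal sketch, where the project and image names are placeholders:

```shell
# Tag a locally built image into the project's GCR namespace, then push it.
docker tag dss-base-image gcr.io/my-gcp-project/dss-base-image
docker push gcr.io/my-gcp-project/dss-base-image
```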
You’re now ready to run recipes and ML models in GKE.
GCP provides GPU-enabled instances with NVIDIA GPUs. Using GPUs for containerized execution requires the following steps.
Follow the GCP documentation on how to create a GKE cluster with GPU accelerators. You can also add a GPU-enabled node pool to an existing cluster. Be sure to run the “DaemonSet” installation procedure, which takes several minutes to complete.

Thereafter, create a new containerized execution configuration dedicated to running GPU workloads. If you specified a tag for the base image, report it in the “Base image tag” field.
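A sketch of the cluster-side setup, assuming an existing cluster; the pool name, cluster name, zone, and accelerator type are examples, and the DaemonSet manifest path should be taken from the GCP documentation matching your node image:

```shell
# Add a GPU-enabled node pool to an existing cluster.
gcloud container node-pools create gpu-pool \
    --cluster my-gke-cluster \
    --zone europe-west1-b \
    --accelerator type=nvidia-tesla-t4,count=1

# Install the NVIDIA driver DaemonSet (manifest location per GCP docs).
kubectl apply -f daemonset-preloaded.yaml
```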
For your containerized execution task to run on nodes with GPUs, and for GKE to configure the CUDA driver on your containers, the corresponding pods must be created with a custom limit (in Kubernetes parlance). This limit indicates that the pods need a specific type of resource, beyond the standard CPU and memory resource types.
You must configure this limit in the containerized execution configuration. To do this:
- In the “Custom limits” section, add a new entry with key `nvidia.com/gpu` and value `1` (to request 1 GPU).
- Add the new entry and save your settings.
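Under the hood, this setting corresponds to the standard Kubernetes extended-resource limit on the execution pods, which a pod spec would express as:

```yaml
# Fragment of a pod container spec requesting one NVIDIA GPU; GKE
# schedules such pods on GPU nodes and exposes the CUDA driver to them.
resources:
  limits:
    nvidia.com/gpu: 1
```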