Setting up (Kubernetes)
Many Kubernetes setups are based on managed Kubernetes clusters handled by your cloud provider. DSS provides deep integrations with these, and we recommend that you read our dedicated sections: Using Amazon Elastic Kubernetes Service (EKS), Using Microsoft Azure Kubernetes Service (AKS), and Using Google Kubernetes Engine (GKE).
Dataiku DSS is not responsible for setting up your local Docker daemon.
Dataiku DSS is not compatible with podman, the alternative container engine for Red Hat Enterprise Linux 8 / CentOS 8.
Dataiku DSS is not compatible with the default setup of OpenShift as a Kubernetes engine.
The prerequisites for running workloads in Kubernetes are:
- You must have an existing Docker daemon. The `docker` command on the DSS machine must be fully functional and usable by the user running DSS. This includes the permission to build images, and thus access to a Docker socket.
- You must have an image registry, accessible by your Kubernetes cluster.
- The local `docker` command must have permission to push images to your image registry.
- The `kubectl` command must be installed on the DSS machine and be usable by the user running DSS.
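A quick sanity check of these prerequisites can be run as the user that runs DSS. The registry host below is a placeholder for illustration, not a value from this document:

```shell
# Verify the Docker daemon is reachable and the socket is usable
docker info

# Verify image build permission with a trivial image
echo "FROM busybox" | docker build -t prereq-check -

# Verify push access to your registry (replace the host with your own)
docker tag prereq-check my-registry.example.com/prereq-check
docker push my-registry.example.com/prereq-check

# Verify kubectl is installed and can reach the cluster
kubectl version
kubectl get nodes
```

If any of these commands fails with a permission error, the user running DSS is likely missing access to the Docker socket or registry credentials.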
Before you can deploy to Kubernetes, at least one “base image” must be constructed.
After each upgrade of DSS, you must rebuild all base images.
To build the base image, run the following command from the DSS data directory:
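For a standard DSS installation, this is typically:

```shell
# Build the base image for containerized execution,
# using the dssadmin tool shipped in the DSS data directory
./bin/dssadmin build-base-image --type container-exec
```

Remember that this step must be repeated after each DSS upgrade.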
After building the base image, you need to create containerized execution configurations.
- In Administration > Settings > Containerized execution, click Add another config to create a new configuration.
- Select Kubernetes and specify your image repository.
- Click Push base images to push the base image to your image repository.
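To confirm that the cluster itself can pull the pushed image, one option is to start a throwaway pod from it. The image name below is a placeholder, and this assumes your kubectl context points at the target cluster:

```shell
# Launch a short-lived pod from the pushed image to confirm the cluster can pull it
kubectl run pull-check --image=my-registry.example.com/dss-base:latest \
  --restart=Never --command -- sleep 5

# Check the pod status (ImagePullBackOff indicates the cluster cannot reach the registry)
kubectl get pod pull-check

# Clean up the test pod
kubectl delete pod pull-check
```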
The configurations for containerized execution can be chosen:
- In the project settings — in which case the settings apply by default to all project activities that can run on containers
- In a recipe’s advanced settings
- In the “Execution environment” tab of the in-memory machine learning Design screen