DSS 8.0 Release notes

Migration notes

Migration paths to DSS 8.0

How to upgrade

It is strongly recommended that you perform a full backup of your DSS data directory prior to starting the upgrade procedure.

For automatic upgrade information, see Upgrading a DSS instance.

Pay attention to the warnings described in Limitations and warnings.

Limitations and warnings

Automatic migration from previous versions (see above) is supported, but there are a few points that need manual attention.

  • The commands to build base images for container execution and API deployer have changed. All base images are now built using options of the ./bin/dssadmin build-base-image command.
  • The legacy “Hadoop 2” standalone packages for Hadoop and Spark integration have been removed. Please use the universal generic-hadoop3 package.

Support removal

Some features that were previously deprecated are now removed or unsupported.

  • Support for Spark 1 (1.6) is removed. We strongly advise you to migrate to Spark 2. All Hadoop distributions can use Spark 2.

Deprecation notice

DSS 8.0 deprecates support for some features and versions. Support for these will be removed in a later release.

  • As a reminder from DSS 7.0, support for “Hive CLI” execution modes for Hive is deprecated and will be removed in a future release. We recommend that you switch to HiveServer2. Please note that “Hive CLI” execution modes are already incompatible with User Isolation Framework.
  • As a reminder from DSS 7.0, support for Microsoft HDInsight is now deprecated and will be removed in a future release. We recommend that users plan a migration toward a Kubernetes-based infrastructure.
  • As a reminder from DSS 7.0, support for Machine Learning through Vertica Advanced Analytics is now deprecated and will be removed in a future release. We recommend that you switch to in-memory machine learning models. In-database scoring of in-memory-trained models will remain available.
  • As a reminder from DSS 7.0, support for Hive SequenceFile and RCFile formats is deprecated and will be removed in a future release.
  • As a reminder from DSS 6.0, support for Pig is deprecated. We strongly advise you to migrate to Spark.

Version 8.0.1 - July 31st, 2020

DSS 8.0.1 is a bugfix release. For a summary of major changes in 8.0, see below.

API Node

  • Fixed individual explanations when the model contains a date feature

Recipes

  • Prepare recipe: Fixed autocomplete of column name when using “multiple columns” step mode
  • Prepare recipe: Improved error handling of the “Rename columns” processor when the step has just been created

Flow

  • Fixed display of “File in folder” dataset when using Flow zones
  • Fixed display of “Metrics” dataset when using Flow zones

Notebooks

  • Fixed possible Jupyter hang when User Isolation Framework is enabled

Machine Learning

  • Fixed behaviour of “Create prediction model” inside an analysis
  • Fixed display of the AutoML dialog images on Chrome
  • Fixed the “View original analysis” button of saved models when the analysis has been deleted
  • Prevented a silent failure when clicking the ‘Lab’ button while the user does not have the right user profile

Projects

  • Fixed creation of the DSS Core Designer tutorials
  • Fixed remapping of code environments when importing projects

Charts

  • Fixed line charts on dashboards being briefly cropped while data is loading

Webapps

  • Fixed issue on macOS and old versions of CentOS

Misc

  • Fixed display of scenario run trigger settings in scenario list
  • Fixed display of managed folder view tab
  • Hide the project settings menu for users who are not allowed to view settings

Version 8.0.0 - July 15th, 2020

DSS 8.0.0 is a major upgrade to DSS, with significant new features.

New features

Dataiku Applications

Dataiku Applications allow Dataiku designers to make their projects reusable and consumable by business users. Once a designer has made a project available as an application, business users can create their own instances of the application, set parameters, upload data, run the applications, and directly obtain results.

For more details, please see Dataiku Applications.

Model Document Generation

In regulated industries, data scientists have to document ML models, at creation and after every change, for traceability. This is often tedious. DSS now features the ability to automatically generate a DOCX document from a machine learning model.

Designers can upload their own DOCX template with placeholders that will automatically be replaced by information, explanations and charts from the ML model. Model Document Generation has extensive coverage of the advanced result screens of DSS Visual ML, allowing creation of rich documents.

For more details, please see Model Document Generator.

Flow Zones

Data Science projects tend to quickly become complex, with a large number of recipes and datasets in the Flow. This can make the Flow hard to read and navigate.

Flow Zones are a completely new way to organize bigger flows into more manageable sub-parts, called zones.

You can now define your zones in the Flow, and assign each dataset, recipe, … to a zone. The zones are automatically laid out in a graph, like super-sized nodes. You can work within a single zone or the whole Flow, and collapse zones to create a simplified view of the Flow.

For more details, please see Flow zones.
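
A minimal sketch of manipulating zones through the dataikuapi Python client is shown below; the get_flow, create_zone and add_item calls are assumptions based on this feature, and all names are placeholders:

    import dataikuapi

    # Connect to the DSS instance (URL and API key are placeholders)
    client = dataikuapi.DSSClient("https://dss.example.com:11200", "YOUR_API_KEY")
    project = client.get_project("MYPROJECT")

    # Assumed zone API: create a zone and move a dataset into it
    flow = project.get_flow()
    zone = flow.create_zone("Data ingestion")
    zone.add_item(project.get_dataset("raw_orders"))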

Advanced hyperparameter searching

In addition to the already-existing grid search for hyperparameters, DSS can now perform random search and Bayesian search, for a faster and more thorough search for the best set of hyperparameters.

For more details, please see Advanced models optimization.
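
As an illustration, the search strategy can also be set programmatically through the Visual ML API. Below is a minimal sketch using the dataikuapi Python client; the settings key names used for the search strategy are assumptions for illustration, not a confirmed schema:

    import dataikuapi

    client = dataikuapi.DSSClient("https://dss.example.com:11200", "YOUR_API_KEY")
    project = client.get_project("MYPROJECT")

    # Create a prediction ML task on a dataset (dataset and target are placeholders)
    mltask = project.create_prediction_ml_task(
        input_dataset="customers",
        target_variable="churn")
    mltask.wait_guess_complete()

    # Switch the search strategy away from the default grid search
    # ("gridSearchParams" and "strategy" key names are assumptions)
    settings = mltask.get_settings()
    settings.get_raw()["modeling"]["gridSearchParams"]["strategy"] = "RANDOM"
    settings.save()

    mltask.start_train()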

Programmatic usage of Row-level-interpretability

DSS 7.0 added support for row-level interpretability for Machine Learning models. This allows you to get a detailed explanation of why a Dataiku model made a given prediction, even when said model is a “black-box” model.

In DSS 7.0, Row-level interpretations were available in the UI, and as the output of the scoring recipe.

DSS 8.0 adds the ability to programmatically obtain explanations through the API node, and also through the Saved Model Python API.

For more details, please see Exposing a visual prediction model.
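
As an illustration, here is a minimal sketch of querying a prediction endpoint with explanations through the dataikuapi Python client; the with_explanations parameter is an assumption based on this feature, and the service, endpoint and record are placeholders:

    import dataikuapi

    # Connect to the API node (URL and service id are placeholders)
    client = dataikuapi.APINodeClient("https://apinode.example.com:12000", "churn-service")

    # Ask for a prediction together with its row-level explanations
    prediction = client.predict_record(
        "churn-endpoint",
        {"age": 42, "plan": "premium"},
        with_explanations=True)

    print(prediction["result"])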

Application-as-recipes

In addition to their “Visual re-use by business users” usage, Dataiku Applications can also be used to reuse an entire flow as if it were a single recipe. This allows designers to quickly design complex flows while making use of “building blocks” built by other designers, without having to maintain the complexity of the underlying reused flow.

For more details, please see Application-as-recipe.

Support for Pandas 1.0

Dataiku now supports Pandas 1.0 (in addition to maintained support for the legacy 0.23 version).

Support for Pandas 1.0 is only available when using a code env. Pandas 1.0 is only compatible with Python >= 3.6.1, so only code envs using Python 3.6.1 (and above) will get the ability to use Pandas 1.0.

Centralization of audit trail

There are multiple use cases for centralizing audit logs from multiple DSS nodes in a single system.

Some of these use cases include:

  • Customers with multiple instances want a centralized audit log in order to retrieve information like “when did each user last do something”.
  • Customers with multiple instances want a centralized audit log in order to have a global view of the usage of their different DSS nodes, and of compliance with their license.
  • Compute Resource Usage reporting capabilities use the audit trail, and make more sense if fully centralized. You may want to cross that information with HR resources, department assignments, …
  • Most MLOps use cases require centralized analysis of API node audit logs.

DSS now features a complete routing and dispatch mechanism for these use cases, with the ability to centralize audit logs from multiple machines to a central location, and enhanced capabilities for analyzing audit logs within DSS.

For more details, please see Audit trail.

Centralization of API node query logs

Building on audit log centralization, you can now also centralize API node query logs. This allows you to set up a feedback loop for your MLOps strategy, in order to analyze the predictions made by the API node, either to detect input data drift or model performance drift.

For more details, please see Configuration for API nodes.

Compute resource usage reporting

DSS acts as the central orchestrator of many computation resources, from SQL databases to Kubernetes. Through DSS, users can leverage and consume these elastic computation resources. It is thus very important to be able to monitor and report on the usage of computation resources, for total governance and cost control of your Elastic AI stack.

DSS now includes a complete stack for reporting and tagging compute resources. For more details, please see Compute resource usage reporting.

Plugin uninstall

It is now possible to uninstall plugins, both from the UI and API. Trying to uninstall a plugin will automatically warn you if the plugin is still in use.
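
A minimal sketch of uninstalling a plugin through the dataikuapi Python client is shown below; get_plugin is part of the public client, while the delete call and its returned future are assumptions based on this feature:

    import dataikuapi

    client = dataikuapi.DSSClient("https://dss.example.com:11200", "ADMIN_API_KEY")

    # Assumed uninstall API: delete the plugin and wait for completion;
    # DSS warns if the plugin is still in use
    plugin = client.get_plugin("my-old-plugin")
    future = plugin.delete()
    future.wait_for_result()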

Public webapps and impersonation in webapps

Two new features reinforce the ability to serve webapps to large numbers of users:

  • Webapps can now be shared with users who are not DSS users and do not have a DSS account. This allows you to share webapps widely, to the whole company. For more details, please see Public webapps
  • Webapp backend code can now perform API calls to the Dataiku API on behalf of the end user viewing the webapp, with full traceability of the end user’s identity. This allows better governance and traceability of actions performed on behalf of users. For more details, please see Webapps and security.

Tag categories

Administrators can now define tag categories. Tag categories allow you to create custom “fields” in the form of tags, with a predefined set of values.

Categorized tags can then be set easily by the end user with validation on the values.

For example, you could create a tag category for the responsible team, one for the department, one for the brand that you’re working on, …

Tag categories can be created and managed by the administrator from Administration > Settings > Tag categories.

Other notable enhancements

Improved Visual ML experience

The Visual ML user experience has been enhanced to streamline the creation of models and the understanding of the Dataiku Lab:

  • Find the Lab associated with each dataset directly from the dataset’s right panel
  • Faster creation of ML models, with a streamlined workflow. You can now create an ML model in 3 clicks from a dataset
  • Ability to create ML models directly from a column in the dataset’s Explore view
  • Better explanations in-product for the various cross-validation strategies

New users and authentication management APIs

The APIs for users and authentication management have been greatly enhanced with:

  • Ability to set user secrets through API, either for end users or admins
  • Ability to set per-user credentials through API, either for end users or admins
  • Ability to impersonate end-users using admin credentials
  • Ability to manipulate user and admin properties through API

For more details, please see Users and groups and Authentication information and impersonation.
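
As an illustration, here is a minimal sketch of impersonating an end user from admin credentials with the dataikuapi Python client; get_user and get_auth_info are part of the public client, while get_client_as is assumed to be the impersonation entry point added in this release:

    import dataikuapi

    # Connect with admin credentials (URL and API key are placeholders)
    admin_client = dataikuapi.DSSClient("https://dss.example.com:11200", "ADMIN_API_KEY")

    # Assumed impersonation API: obtain a client acting on behalf of an end user
    user = admin_client.get_user("jdoe")
    user_client = user.get_client_as()

    # Calls made through user_client are performed and traced as "jdoe"
    print(user_client.get_auth_info())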

Enhanced programmatic flow building APIs

Many APIs have seen vast improvements, especially regarding the ability to entirely build and control Flows via the API.

For these and many others, please see Python APIs for a complete index of the Python API.
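
For example, a small Flow can now be built entirely from code. Below is a minimal sketch using the dataikuapi Python client; create_dataset and the SyncRecipeCreator builder are part of the public client, while the connection names and parameters shown here are placeholders:

    import dataikuapi
    from dataikuapi.dss.recipe import SyncRecipeCreator

    client = dataikuapi.DSSClient("https://dss.example.com:11200", "YOUR_API_KEY")
    project = client.get_project("MYPROJECT")

    # Declare an external filesystem dataset (connection and path are placeholders)
    project.create_dataset("raw_orders", "Filesystem",
                           params={"connection": "filesystem_root", "path": "/data/orders"})

    # Create a sync recipe copying raw_orders into a new managed dataset
    builder = SyncRecipeCreator("sync_orders", project)
    builder.with_input("raw_orders")
    builder.with_new_output("orders_copy", "filesystem_managed")
    recipe = builder.build()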

Enhanced support for container images

All three kinds of container images (containerized execution, Spark-on-Kubernetes and API deployer) are now built on a single CentOS 7 base.

This release brings the following enhancements:

  • Support for CUDA 10.0 and 10.1 in containers
  • Full support for Python-3-only containers
  • Greatly enhanced customization capabilities, including the ability to use a proxy
  • Ability to use prebuilt images for faster image builds

For more details, please see Running in containers and Customization of base images.

Experimental support for OpenShift

DSS 8.0 adds experimental support for OpenShift as a Kubernetes runtime.

For more details, please see Using OpenShift.

Managed Kubernetes namespaces and quotas

DSS can now automatically create Kubernetes namespaces for both containerized execution and Spark-on-Kubernetes. Namespaces can be defined using variable expansion, in order to create namespaces per user/team/project/…

DSS can automatically apply policies to the dynamic namespaces, notably resource quotas (in order to limit the total amount of computation/memory available to a namespace/user/team/project/…) and limit ranges (in order to set default resource control for computations running in the dynamic namespace).

For more details, please see Dynamic namespace management.

Pod tolerations, affinity and node selectors

You can now add custom Kubernetes tolerations, affinity statements or node selectors in order to control more precisely the placement of your pods on Kubernetes.

For more details, please see Dynamic namespace management.

Import notebooks

You can now directly import .ipynb files from the DSS UI.

Enhanced API node audit logging

API node audit logging now includes project key / saved model id / saved model version for prediction endpoints.

In addition, you can ask DSS to dump and/or audit the post-enrichment data, when using query enrichments.

For more details, please see Exposing a visual prediction model.

Disabling users

It is now possible to disable users instead of outright deleting them. Disabled users cannot log in, cannot run scenarios, and don’t consume licenses.

Disabling/enabling users can be done through the UI and API.
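
A minimal sketch of disabling a user through the dataikuapi Python client, assuming the user definition carries an enabled flag (the flag name is an assumption, not a confirmed field):

    import dataikuapi

    client = dataikuapi.DSSClient("https://dss.example.com:11200", "ADMIN_API_KEY")

    # Fetch the user definition, flip the assumed "enabled" flag, and save it back
    user = client.get_user("jdoe")
    definition = user.get_definition()
    definition["enabled"] = False
    user.set_definition(definition)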

Instance-wide default code env

You can now select a code env that will be used by default across all projects.

Instance-wide default containerized execution config

You can now select a containerized execution config that will be used by default across all projects.

Improved “Performance” ML heuristics

The “Performance” template for Visual ML now includes updated defaults and heuristics that will generally result in obtaining better models, faster.

Other enhancements and fixes

Flow

  • Fixed wrong value in the partitioning “Test dependencies” function
  • Fixed navigation issue with cross-project datasets leading to loss of flow centering
  • Fixed issue when copying a subflow containing HDFS datasets to a new project
  • Fixed icons display issues for plugin recipes
  • Fixed wrongful attempt to write BigQuery datasets when importing a project
  • Project duplication will now only duplicate uploaded datasets by default

Charts

  • Geo scatter plot: Fixed points with neither size nor color that were mistakenly placed at (0,0)

Plugins

  • Fixed dynamic select widget for custom exporters
  • Python plugin recipes can now accept BigQuery datasets as outputs

Data preparation

  • Fixed issue when removing values from a “Remove rows on value” processor
  • Extract Date components processor: Extracting minutes, seconds and milliseconds can now run in SQL databases

Datasets

  • Fixed SQL dataset sample retrieval with both partitioning and filtering

Elastic AI

  • Fixed support for Kubernetes > 1.16
  • Spark install can now set up better defaults tuned for Kubernetes

Machine Learning

  • Cost matrix gain was added to the list of metrics displayed in the all metrics screen
  • “Max feature proportion” on tree ensemble algorithms is now hyperparameter-searchable
  • PMML export now outputs probabilities and can now use the model-specified threshold
  • API node: Fixed wrongful scoring of rows that were removed by the preparation script
  • Added more parameters to the Isolation Forest algorithm
  • Fixed issues with empty columns with unicode column names
  • Fixed clustering scoring when outlier detection is enabled and the dataset to score is very small
  • Code of custom models is now displayed in results

Jupyter notebooks

  • Fixed issue when DSS is installed with a base Python 3.6 environment
  • Properly show the Python version in the notebooks list

API deployer

  • Fixed logging settings at the infrastructure level

Collaboration

  • Added ability to duplicate wiki articles
  • Improved Slack integration with Slack Blocks

Scenarios

  • Improved the consistency check step to report more errors

API

  • Enhanced API for project folders - see Project folders
  • Fixed API for pushing container base images

Security

  • Added additional capabilities to restrict data exports. For more details, please see Advanced security options
  • Added ability to prevent users from writing active Web content (webapps, Jupyter notebooks, RMarkdown reports). For more details, please see Main project permissions

Misc

  • Enhanced consistency of all widgets used to edit lists of values or lists of key/value pairs
  • The Dataiku chat window is now back to appearing only on the homepage by default