Dataiku DSS
You are viewing the documentation for version 11 of DSS.

Models evaluations

Evaluating a machine learning model consists of computing its performance and behavior on a set of data called the Evaluation set. Model evaluations are the cornerstone of MLOps capabilities: they enable Drift analysis, Model Comparisons, and automated retraining of models.
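To make the idea concrete, here is a minimal, generic sketch of what an evaluation computes, using scikit-learn rather than the DSS API: a model is fit on training data, then its performance metrics are measured on a held-out evaluation set. The dataset, metrics, and split shown here are illustrative assumptions, not the DSS implementation.

```python
# Generic illustration (NOT the DSS API): compute a model's performance
# on a held-out "evaluation set". All data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# The evaluation set is data the model never saw during training.
X_train, X_eval, y_train, y_eval = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An evaluation records the model's metrics on the evaluation set;
# storing successive evaluations over time is what makes drift
# analysis and model comparison possible.
evaluation = {
    "accuracy": accuracy_score(y_eval, model.predict(X_eval)),
    "roc_auc": roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1]),
}
print(evaluation)
```

In DSS, the equivalent results are produced by the evaluation recipes described below and stored in a Model Evaluation Store, so they can be tracked, compared, and checked automatically.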

  • Concepts
    • When training
    • Subsequent evaluations
  • Evaluating DSS models
    • Configuration of the evaluation recipe
      • Labels
      • Sampling
    • Limitations
  • Evaluating external models
    • Configuration of the standalone evaluation recipe
      • Labels
      • Sampling
  • Analyzing evaluation results
    • The evaluations comparison
    • Model Evaluation details
    • Using evaluation labels
  • Automating model evaluations and drift analysis
    • Metrics and Checks
    • Scenarios and feedback loop
    • Feedback loop
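The outline above covers drift analysis and automated checks. As an illustration only (not the DSS implementation), one common way to flag input-data drift between the training data and newer evaluation data is a two-sample Kolmogorov-Smirnov statistic per feature; the data and decision threshold below are hypothetical.

```python
# Illustrative drift check (NOT the DSS implementation): compare the
# distribution of one feature in the training data vs. newer data
# using the two-sample Kolmogorov-Smirnov statistic.
import numpy as np

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
# Simulate newer data whose distribution has shifted.
new_feature = rng.normal(loc=0.5, scale=1.0, size=5000)

def ks_statistic(a: np.ndarray, b: np.ndarray) -> float:
    """Two-sample KS statistic: max gap between the empirical CDFs."""
    values = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), values, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), values, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

stat = ks_statistic(train_feature, new_feature)
drifted = stat > 0.1  # hypothetical alerting threshold
print(f"KS statistic={stat:.3f}, drifted={drifted}")
```

In a DSS scenario, a check of this kind would run after each evaluation, and a failing check could trigger an alert or a retraining step, closing the feedback loop described above.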

© Copyright 2022, Dataiku

Built with Sphinx using a theme provided by Read the Docs.