Train and evaluate forecasting models

Use this recipe to train forecasting models and evaluate them on historical data.

Input Data

Historical dataset

Dataset with time series data (one parsed date column, one or more numerical target columns, and optionally one or more time series identifier columns for wide or long format).
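For reference, a minimal pandas sketch (with hypothetical column names) of the same data in wide and long format:

```python
import pandas as pd

# Wide format: one parsed date column, one numeric column per time series
wide = pd.DataFrame({
    "date": pd.to_datetime(["2021-01-01", "2021-01-02"]),
    "sales_store_A": [10.0, 12.0],
    "sales_store_B": [7.0, 9.0],
})

# Long format: the series are stacked, with an identifier column ("store")
long = wide.melt(id_vars="date", var_name="store", value_name="sales")
```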

Output Data

Trained model folder

Folder to save trained forecasting models.

Performance metrics dataset

Dataset of forecasting models evaluated on a split of the historical dataset.

Evaluation dataset

Dataset with evaluation forecasts used to compute the performance metrics:

  • This dataset can be used to build charts and visualize your models’ performance

Settings

Input parameters

Time column

Column with parsed dates and no missing values:

  • To parse dates, you can use a Prepare recipe.

  • To fill missing values, you can use the Time Series Preparation Resampling recipe.
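Outside of those recipes, the same preparation can be sketched in pandas (hypothetical column names; a daily series with one missing date):

```python
import pandas as pd

df = pd.DataFrame({
    "date": ["2021-01-01", "2021-01-02", "2021-01-04"],  # 2021-01-03 missing
    "sales": [10.0, 12.0, 15.0],
})

# Parse dates (what a Prepare recipe step would do)
df["date"] = pd.to_datetime(df["date"])

# Resample to daily frequency so the time column has no gaps,
# then interpolate the missing target value
df = (df.set_index("date")
        .resample("D")
        .mean()
        .interpolate()
        .reset_index())
```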

Frequency

Frequency of the time column, from year to minute:

  • For minute and hour frequency, you can select the number of minutes or hours.

  • For week frequency, you can select the end-of-week day.

Target column(s)

Time series columns you want to forecast (must be numeric):

  • You can select one (univariate forecasting) or multiple columns (multivariate forecasting).

Long format

Select this option when the dataset contains multiple time series stacked on top of each other (see wide or long format):

  • If selected, you must then select the columns that identify the individual time series, using the Time series identifiers parameter.

Sampling

Sampling method

Choose between:

  • Last records (most recent): To only use the last records of each time series during training (the N most recent records)

  • No sampling (whole data): To use all records

Nb. records

Maximum number of records to extract per time series if Last records was selected
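The Last records sampling behavior can be sketched in pandas (hypothetical column names) as keeping the N most recent rows of each time series:

```python
import pandas as pd

df = pd.DataFrame({
    "store": ["A"] * 5 + ["B"] * 5,
    "date": pd.date_range("2021-01-01", periods=5).tolist() * 2,
    "sales": range(10),
})

n_records = 3  # the "Nb. records" setting
# Keep only the N most recent records of each time series
sampled = (df.sort_values("date")
             .groupby("store", group_keys=False)
             .tail(n_records))
```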

Modeling

Forecasting horizon

Number of future values to predict:

  • This number will be reused in the Forecast future values recipe

  • Be careful: high values increase training time.

Forecasting mode

This parameter lets you either have Dataiku create your models for you (AutoML modes) or take full control over model creation (Expert modes).

You can choose between four forecasting modes:

  • AutoML - Quick prototypes (default): Train baseline models quickly

    • Statistical models: Trivial identity and Seasonal naive are trained

    • Deep Learning models: a FeedForward neural network is trained for 10 epochs of 50 batches, with a batch size of 32 samples

  • AutoML - High performance: Be patient and get even more accurate models

    • Statistical models: Trivial identity and Seasonal naive are trained

    • Deep Learning models: FeedForward, DeepAR, and Transformer are trained for 10 epochs (30 for multivariate), with an automatically adjusted number of batches per epoch and a batch size of 32 samples

  • Expert - Choose algorithms: Choose which models to train, set the seasonality of the statistical models and tune training parameters of Deep Learning models.

    • Statistical models

      • Season length: Length of the seasonal period (in selected frequency unit) used by statistical models.

        • For example, season length is 7 for daily data with a weekly seasonality (season length is 4 for a 6H frequency with a daily seasonality).

    • Deep Learning training parameters

      • Number of epochs: Number of times the Deep Learning models see the training data.

      • Batch size: Number of samples to include in each batch. A sample is a time window of length 2 × the forecasting horizon.

      • Scale number of batches: Automatically adjust the number of batches per epoch to the training data size to statistically cover all the training data in each epoch

        • Example: 10 time series of length 10000 will give 209 batches per epoch with a batch size of 32 and a forecasting horizon of 15.

      • Number of batches per epoch: Use this to set a fixed number of batches per epoch to ensure the training time does not increase with the dataset size.

  • Expert - Customize algorithms: Pass additional keyword arguments to each algorithm
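Two of the rules above can be sketched numerically: the season length is the seasonal period divided by the sampling interval, and the scaled batch count can be reproduced from the documented example. The `batches_per_epoch` formula below is an assumption inferred from the 209-batch example, not the plugin's documented implementation:

```python
import math

import pandas as pd


def season_length(freq: str, seasonal_period: str) -> int:
    """Number of steps at `freq` that make up one seasonal period."""
    return int(pd.Timedelta(seasonal_period) / pd.Timedelta(freq))


def batches_per_epoch(n_series: int, series_length: int,
                      batch_size: int, horizon: int) -> int:
    """Batches per epoch so that one epoch covers roughly all training
    points, assuming each sample contributes `horizon` points (inferred
    from the example above, not the plugin's exact formula)."""
    total_points = n_series * series_length
    return math.ceil(total_points / (batch_size * horizon))


season_length("1d", "7d")  # → 7  (daily data, weekly seasonality)
season_length("6h", "1d")  # → 4  (6H data, daily seasonality)
batches_per_epoch(10, 10_000, batch_size=32, horizon=15)  # → 209
```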

Evaluation

Split used to compute performance metrics. The final model is then retrained on the entire sample.

Splitting strategy

Choose between:

  • Time-based Split (only supported method): Evaluate on the last Forecasting horizon values

Advanced

Add external features

Add numeric features for exogenous, time-dependent factors (e.g., holidays, special events).

  • External feature columns:

    • Be careful: future values of external features will be required at forecast time.

    • You should only use this parameter for features that you know about in advance, e.g., holidays, special events, promotions.

    • If you have features you would like to include in your models but which you do NOT know about in advance, e.g., the weather, we recommend either:

      • Including these features as Target columns to forecast

      • Using external forecasting data providers, e.g. weather forecasting APIs

    • Note that external features are only usable by AutoARIMA, DeepAR, Transformer, and MQ-CNN algorithms.
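A holiday indicator is a typical known-in-advance external feature, since its future values can be computed ahead of time. A pandas sketch (hypothetical column names):

```python
import pandas as pd

dates = pd.date_range("2020-12-30", "2021-01-03", freq="D")
df = pd.DataFrame({"date": dates})

# Known-in-advance numeric feature: 1 on public holidays, 0 otherwise
holidays = {pd.Timestamp("2021-01-01")}
df["is_holiday"] = df["date"].isin(holidays).astype(int)
```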

Use GPU

If you have installed the GPU version of the plugin, additional GPU-specific parameters will be available. Note that only Deep Learning models can be trained on a GPU. Statistical models are always trained on the CPU.

  • Use GPU: If selected, additional GPU-related parameters can be specified. Otherwise, all models will be trained on the CPU.

    • GPU location: Choose between:

      • Local GPU: If the GPU is on the DSS instance server and the recipe is executed locally

      • Container GPU: If the GPU is in a container and the recipe is executed within this container

        • You can select a container in the Advanced tab of the recipe > Container configuration

        • If the container has multiple GPUs, only the first one will be used

    • Local GPU device: Select one GPU device on the DSS instance server

Note that increasing the Batch size (in Deep Learning training parameters) is a good way to make GPU training much faster than CPU training.