Prediction settings

The “Settings” tab allows you to fully customize all aspects of your prediction.

Target settings

Prediction type

DSS supports three different types of prediction for three different types of targets.

  • Regression is used when the target is numeric (e.g. price of the apartment).
  • Two-class classification is used when the target can be one of two categories (e.g. presence or absence of a doorman).
  • Multi-class classification is used when targets can be one of many categories (e.g. neighborhood of the apartment).

DSS can build predictive models for each of these kinds of learning tasks. Available options, algorithms and result screens will vary depending on the kind of learning task.

Settings: Train / Test set

When training a model, it is important to test the performance of the model on a “test set”. During the training phase, DSS “holds out” the test set, and the model is only trained on the train set.

Once the model is trained, DSS evaluates its performance on the test set. This ensures that the evaluation is done on data that the model has “never seen before”.

DSS provides two main strategies for separating your data into training and testing sets.

Splitting the dataset

By default, DSS randomly splits the input dataset into a training and a testing set. The fraction of data used for training can be specified in DSS. 80% is a standard fraction of data to use for training.
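As an illustration (not DSS internals), the same 80/20 random split can be sketched with scikit-learn; the toy DataFrame and its columns are assumptions for the example:

```python
# Illustrative 80/20 random split with scikit-learn (not DSS internals).
# The toy DataFrame and its columns are assumptions for the example.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "surface": [30, 45, 60, 75, 90, 105, 120, 135, 150, 165],
    "price":   [100, 150, 200, 240, 280, 330, 370, 410, 460, 500],
})

# 80% of rows go to training; the remaining 20% are held out for testing.
train, test = train_test_split(df, train_size=0.8, random_state=42)
print(len(train), len(test))  # 8 2
```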

Furthermore, depending on the engine, DSS can perform this random split on a subsample of the dataset. This is especially important for in-memory engines, like scikit-learn. DSS defaults to using the first 100,000 rows of the dataset.

K-fold cross-test

A variant of this method is called “K-fold cross-test”, which DSS can also use. With K-fold cross-test, the dataset is split into K equally sized portions, known as folds. Each fold is independently used as a separate testing set, with the remaining K-1 folds used as a training set. This method strongly increases training time (roughly speaking, it multiplies it by K). However, it allows for two interesting features:

  • It provides a more accurate estimation of model performance, by providing “error margins” on the performance metrics. When K-fold cross-test is enabled, all performance metrics will have tolerance information.
  • Once the scores have been computed on each fold, DSS can retrain the model on 100% of the dataset’s data. This is useful if you don’t have much training data.
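A minimal sketch of K-fold cross-testing with scikit-learn: each fold is used once as the test set, and the spread of fold scores gives the “error margin” described above. The data, model, and K=5 are illustrative assumptions, not DSS defaults.

```python
# Sketch of 5-fold cross-testing: each fold serves once as the test
# set, and the spread of fold scores gives the "error margin".
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=100, n_features=4, noise=10.0, random_state=0)
scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")

print("R2 per fold:", np.round(scores, 3))
print("mean +/- std:", round(scores.mean(), 3), "+/-", round(scores.std(), 3))

# Once the fold scores are computed, the model can be retrained on
# 100% of the data, as DSS optionally does.
final_model = LinearRegression().fit(X, y)
```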

In general, use a random split of your dataset if your data is homogeneous.

Explicit extracts

DSS also allows the user to specify explicitly which data to use as the training and testing set. If your data has a known structure, such as apartment prices from two different cities, it may be beneficial to use this structure to specify training and testing sets.

The explicit extracts can either come from a single dataset or from two different datasets. Each extract can be defined using:

  • Filtering rules
  • Sampling rules
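A hedged sketch of an explicit extract defined by a filtering rule, where rows from one known group form the train set and another group forms the test set; the "city" column and its values are assumptions for the example:

```python
# Hedged sketch of an explicit extract via filtering rules: one known
# group of rows trains, another tests. Column and values are assumptions.
import pandas as pd

df = pd.DataFrame({
    "city":  ["Paris", "Paris", "Lyon", "Lyon", "Paris", "Lyon"],
    "price": [400, 420, 250, 260, 410, 255],
})

train = df[df["city"] == "Paris"]  # filtering rule for the train extract
test  = df[df["city"] == "Lyon"]   # filtering rule for the test extract
print(len(train), len(test))  # 3 3
```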

Using explicit extracts also allows you to use the output of a Split recipe. The Split recipe provides much more control over how you split your data than the built-in random splitting of the Machine Learning component.

In general, use an explicit extract of your dataset if your data is heterogeneous.

Note

In “Explicit extracts” mode, since you are providing pre-existing train and test sets, it is not possible to use K-fold cross-test.

Optimization and Evaluation

The model is optimized according to the selected measure. This measure is used for model evaluation in cross-validation (see the Train and validation panel) and hyperparameter grid search (when you specify a list of possible values in an algorithm’s settings).

For Two-class classification problems, the probability threshold for scoring the target class is optimized according to the selected scoring measure.

Settings: Features handling

See ../features_handling

Settings: Feature generation

Note

You can change the settings for feature generation under Models > Settings > Feature generation

DSS can compute interactions between variables, such as linear and polynomial combinations. These generated features allow linear methods, such as linear regression, to detect non-linear relationships between the variables and the target, and may improve model performance in these cases.

Settings: Feature reduction

Note

You can change the settings for feature reduction under Models > Settings > Feature reduction

Feature reduction operates on the preprocessed features. It allows you to reduce the dimension of the feature space in order to regularize your model or make it more interpretable.

  • Correlation with target: Only the features most correlated (Pearson) with the target will be selected. A threshold for minimum absolute correlation can be set.
  • Tree-based: This will create a Random Forest model to predict the target. Only the top features according to the feature importances computed by the algorithm will be selected.
  • Principal Component Analysis: The feature space dimension will be reduced using Principal Component Analysis. Only the top principal components will be selected. Note: This method will generate non-interpretable feature names as its output. The model may be performant, but will not be interpretable.
  • Lasso regression: This will create a LASSO model to predict the target, using 3-fold cross-validation to select the best value of the regularization term. Only the features with nonzero coefficients will be selected.
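As an illustration of the first strategy, "Correlation with target" can be sketched as keeping only the features whose absolute Pearson correlation with the target exceeds a threshold; the data and the 0.5 threshold are assumptions for the example:

```python
# Illustrative sketch of the "Correlation with target" strategy: keep
# features whose absolute Pearson correlation with the target exceeds
# a threshold. Data and the 0.5 threshold are assumptions.
import pandas as pd

df = pd.DataFrame({
    "surface": [30, 60, 90, 120, 150],
    "rooms":   [1, 2, 3, 4, 5],
    "noise":   [7, 3, 9, 1, 5],
    "price":   [100, 200, 310, 400, 490],
})

# Absolute Pearson correlation of each feature with the target.
corr = df.corr(method="pearson")["price"].drop("price").abs()
selected = corr[corr >= 0.5].index.tolist()
print(selected)  # ['surface', 'rooms']
```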

Settings: Algorithms

Note

You can change the settings for algorithms under Models > Settings > Algorithms

DSS supports several algorithms that can be used to train predictive models. We recommend trying several different algorithms before deciding on one particular modeling method.

The available algorithms depend on the selected engine. See engines for details.

Settings: Metric

You can choose the metric that DSS will use to evaluate models.

This metric will be used to decide which model is the best when performing hyperparameter optimization.

On the test set, this metric is also the main one shown by default. However, DSS always computes all metrics, so you can choose another metric to display on the final model (note that if you change the metric, the hyperparameters are not guaranteed to be the best ones for this new metric).

Threshold optimization

When doing binary classification, most models don’t output a single binary answer, but instead a continuous “probability of being positive”. You then need to select a threshold on this probability, above which DSS will consider the sample as positive.

Optimizing the threshold is always a compromise between the risks of false positives and false negatives.

DSS will compute the true-positive, true-negative, false-positive and false-negative counts (also known as the confusion matrix) for many values of the threshold, and will automatically select the threshold based on the selected metric.
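The sweep described above can be sketched as follows: try many candidate thresholds and keep the one that maximizes the chosen metric (F1 here; the labels and predicted probabilities are assumptions for the example):

```python
# Sketch of metric-driven threshold selection: sweep candidate
# thresholds and keep the one maximizing the chosen metric (F1 here).
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
proba  = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6])

thresholds = np.linspace(0.05, 0.95, 19)
scores = [f1_score(y_true, (proba >= t).astype(int), zero_division=0)
          for t in thresholds]
best = float(thresholds[int(np.argmax(scores))])
print("best threshold:", round(best, 2))  # 0.65
```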

You can also manually set the threshold at any time in the result screens.

Misc: GPU support for XGBoost

As of release 0.7, XGBoost supports GPU training and scoring. As of release 4.3, DSS supports this feature.

In practice, to train a gradient boosted trees model with XGBoost on a GPU, you need to:

  1. Have CUDA installed on your machine
  2. Have a custom Python code environment
  3. Compile XGBoost against CUDA: http://xgboost.readthedocs.io/en/latest/build.html#building-with-gpu-support
  4. Install the XGBoost Python package in your custom code environment: from the command line, activate your code environment (source pathtoyourenv/bin/activate), then run pip install -e pathtoxgboost/python-package
  5. In DSS visual machine learning, for a prediction task, enable the “Enable GPU acceleration” checkbox for XGBoost in the “Algorithms” pane.
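On the XGBoost side, GPU training is requested through the tree_method parameter. A hedged sketch of the parameters involved follows; the objective and hyperparameter values are illustrative assumptions, not DSS defaults, and the parameter names follow the XGBoost documentation of that era:

```python
# Hedged sketch of the XGBoost parameters involved in GPU training.
# 'gpu_hist' is the GPU-accelerated histogram algorithm; the other
# values are illustrative assumptions, not DSS defaults.
params = {
    "objective": "binary:logistic",
    "tree_method": "gpu_hist",  # request GPU-accelerated training
    "max_depth": 6,
    "eta": 0.1,
}

# With a CUDA-enabled XGBoost build, training would then look like:
# import xgboost as xgb
# booster = xgb.train(params, dtrain, num_boost_round=100)
print(params["tree_method"])  # gpu_hist
```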