Prediction (Supervised ML)

Prediction (aka supervised machine learning) is used when you have a target variable that you want to predict. For instance, you may want to predict the price of apartments in New York City using the size of the apartments, their location and amenities in the building. In this case, the price of the apartments is the target, while the size of the apartments, their location and the amenities are the features used for prediction.

Note

Our Tutorial 103 provides a step-by-step explanation of how to create your first prediction model and deploy it to score new records.

The rest of this document assumes that you have followed this tutorial.

Running Supervised Machine Learning in DSS

Use the following steps to access supervised machine learning in DSS:

  • Go to the Flow for your project
  • Click on the dataset you want to use
  • Select the Lab
  • Create a new visual analysis
  • Click on the Models tab
  • Select Create first model
  • Select Prediction

Settings: Target settings

Prediction type

DSS supports three different types of prediction for three different types of targets.

  • Regression is used when the target is numeric (e.g. price of the apartment).
  • Two-class classification is used when the target can be one of two categories (e.g. presence or absence of a doorman).
  • Multi-class classification is used when the target can be one of many categories (e.g. the neighborhood of the apartment).

DSS can build predictive models for each of these kinds of learning tasks. Available options, algorithms and result screens will vary depending on the kind of learning task.

Settings: Train / Test set

When training a model, it is important to test the performance of the model on a “test set”. During the training phase, DSS “holds out” the test set, and the model is trained only on the train set.

Once the model is trained, DSS evaluates its performance on the test set. This ensures that the evaluation is done on data that the model has “never seen before”.

DSS provides two main strategies for separating the data into a training set and a testing set.

Splitting the dataset

By default, DSS randomly splits the input dataset into a training and a testing set. The fraction of data used for training can be specified in DSS. 80% is a standard fraction of data to use for training.

Furthermore, depending on the engine, DSS can perform this random split on a subsample of the dataset. This is especially important for in-memory engines, such as the Scikit-learn / XGBoost engine. DSS defaults to using the first 100,000 rows of the dataset.
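
For intuition, here is a minimal sketch of the same idea with pandas and scikit-learn (not DSS internals): keep at most the first 100,000 rows, then split them 80/20 at random. The toy DataFrame and column names are invented for illustration.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-in for an apartments dataset (hypothetical columns)
df = pd.DataFrame({"size_sqft": range(1000), "price": range(1000)})

sample = df.head(100_000)  # in-memory engines work on a capped sample of the first rows
train_df, test_df = train_test_split(sample, train_size=0.8, random_state=42)  # 80/20 random split
print(len(train_df), len(test_df))  # 800 / 200 with this toy data
```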

K-fold cross-test

A variant of this method is called “K-Fold cross test”, which DSS can also use. With k-fold cross-test, the dataset is split into n equally sized portions, known as folds. Each fold is independently used as a separate testing set, with the remaining n-1 folds used as a training set. This method strongly increases training time (roughly speaking, it multiplies it by n). However, it allows for two interesting features:

  • It provides a more accurate estimation of model performance, by providing “error margins” on the performance metrics. When K-fold cross-test is enabled, all performance metrics will have tolerance information.
  • Once the scores have been computed on each fold, DSS can retrain the model on 100% of the dataset’s data. This is useful if you don’t have much training data.
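
As a rough illustration of the k-fold cross-test idea (a scikit-learn sketch, not DSS’s implementation), you can score a model on each fold to get a mean score and its spread, then retrain on all of the data:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=1000, n_features=10, random_state=0)  # synthetic data
model = RandomForestRegressor(n_estimators=50, random_state=0)

# One score per fold: the mean is the estimate, the spread gives the "error margin"
scores = cross_val_score(model, X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0),
                         scoring="r2")
print(f"R2 = {scores.mean():.3f} +/- {scores.std():.3f}")

model.fit(X, y)  # optionally retrain the final model on 100% of the data
```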

In general, use a random split of your dataset if your data is homogeneous.

Explicit extracts

DSS also allows the user to specify explicitly which data to use as the training and testing set. If your data has a known structure, such as apartment prices from two different cities, it may be beneficial to use this structure to specify training and testing sets.

The explicit extracts can either come from a single dataset or from two different datasets. Each extract can be defined using:

  • Filtering rules
  • Sampling rules

Using explicit extracts also allows you to use the output of a Split recipe. The Split recipe provides much more control over how you split your data than the built-in random splitting of the machine learning component.

In general, use an explicit extract of your dataset if your data is heterogeneous.
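
For example, an explicit extract based on a filtering rule might look like this outside of DSS (a sketch; the "city" column and its values are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({
    "city":  ["New York", "New York", "Boston", "Boston"],
    "price": [3200, 2800, 1900, 2100],
})

train_df = df[df["city"] == "New York"]  # filtering rule defining the train extract
test_df = df[df["city"] == "Boston"]     # filtering rule defining the test extract
```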

Note

In “Explicit extracts” mode, since you are providing pre-existing train and test sets, it is not possible to use K-fold cross-test.

Optimization and Evaluation

The model is optimized according to the selected measure. This measure is used for model evaluation in cross-validation (see the Train and validation panel) and hyperparameter grid search (when you specify a list of possible values in an algorithm’s settings).

For Two-class classification problems, the probability threshold for scoring the target class is optimized according to the selected scoring measure.

Settings: Feature generation

Note

You can change the settings for feature generation under Models > Settings > Feature generation

DSS can compute interactions between variables, such as linear and polynomial combinations. These generated features allow linear methods, such as linear regression, to detect non-linear relationships between the variables and the target. These generated features may improve model performance in these cases.
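
As a sketch of what such generated features look like, scikit-learn’s PolynomialFeatures is an assumed analogue (not DSS’s exact implementation):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[2.0, 3.0],
              [1.0, 5.0]])  # two rows, two original features

poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)
print(poly.get_feature_names_out())  # ['x0' 'x1' 'x0^2' 'x0 x1' 'x1^2']
```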

Settings: Feature reduction

Note

You can change the settings for feature reduction under Models > Settings > Feature reduction

Feature reduction operates on the preprocessed features. It allows you to reduce the dimension of the feature space in order to regularize your model or make it more interpretable.

  • Correlation with target: Only the features most correlated (Pearson) with the target will be selected. A threshold for minimum absolute correlation can be set.
  • Tree-based: This will create a Random Forest model to predict the target. Only the top features according to the feature importances computed by the algorithm will be selected.
  • Principal Component Analysis: The feature space dimension will be reduced using Principal Component Analysis. Only the top principal components will be selected. Note: This method will generate non-interpretable feature names as its output. The model may be performant, but will not be interpretable.
  • Lasso regression: This will create a LASSO model to predict the target, using 3-fold cross-validation to select the best value of the regularization term. Only the features with nonzero coefficients will be selected.
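
Two of these strategies can be sketched with scikit-learn (an assumed analogue, not the exact DSS implementation): tree-based selection keeps the features with the highest Random Forest importances, and Lasso-based selection keeps the features with nonzero coefficients.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=500, n_features=20, n_informative=5, random_state=0)

# Tree-based: keep the 5 features with the highest Random Forest importances
tree_sel = SelectFromModel(RandomForestRegressor(n_estimators=50, random_state=0),
                           max_features=5, threshold=-np.inf)
X_tree = tree_sel.fit_transform(X, y)

# Lasso: 3-fold CV picks the regularization, features with nonzero coefficients are kept
lasso_sel = SelectFromModel(LassoCV(cv=3, random_state=0))
X_lasso = lasso_sel.fit_transform(X, y)

print(X_tree.shape, X_lasso.shape)
```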

Settings: Algorithms

Note

You can change the settings for algorithms under Models > Settings > Algorithms

DSS supports several algorithms that can be used to train predictive models. We recommend trying several different algorithms before deciding on one particular modeling method.

The available algorithms depend on the selected engine. See Machine learning training engines for details.

Settings: Hyperparameters optimization

Optimizing hyper-parameters

Each machine learning algorithm has some settings, called hyper-parameters.

For each algorithm that you select in DSS, you can ask DSS to explore several values for each parameter. For example, for a regression algorithm, you can try several values of the regularization parameter.

DSS will automatically try each specified value and only keep the best one. This process is the optimization of hyper-parameters, or “grid search”.

In order to decide which parameter is the best, DSS resplits the train set and extracts a “cross validation” set. It then repeatedly trains on train set minus cross-validation set, and then verifies how the model performed on the cross-validation set.

During this optimization of hyper-parameters, DSS never uses the test set, which must remain “pristine” for final evaluation of the model quality.

Search parameters

You can tune the following parameters:

Cross-validation parameters

There are several strategies for selecting the cross-validation set.

Simple split cross validation

With this method, the training set is split into a “real training” and a “cross-validation” set. For each value of each hyperparameter, DSS trains the model and computes the evaluation metric, keeping the value of the hyperparameter that provides the best evaluation metric.

The obvious drawback of this method is that it further restricts the size of the data on which DSS truly trains. Also, this method comes with some uncertainty, linked to the characteristics of the split.
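
A minimal sketch of this simple-split strategy (assuming scikit-learn; the candidate values and metric are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# Carve a cross-validation set out of the training data
X_tr, X_cv, y_tr, y_cv = train_test_split(X, y, test_size=0.2, random_state=0)

best_C, best_score = None, float("-inf")
for C in [0.01, 0.1, 1.0, 10.0]:  # candidate values for one hyperparameter
    model = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
    score = roc_auc_score(y_cv, model.predict_proba(X_cv)[:, 1])
    if score > best_score:
        best_C, best_score = C, score
print(best_C, best_score)  # the value kept is the one that scored best on the CV set
```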

K-Fold cross validation

With this method, the training set is split into n equally sized portions, known as folds. For each value of the hyperparameter and each fold, DSS trains the model on the other n-1 folds and computes the evaluation metric on the remaining one. For each value of the hyperparameter, DSS keeps the average over all folds. DSS keeps the value of the hyperparameter that provides the best evaluation metric and then retrains the model with this hyperparameter value on the whole training set.

This method increases the training time (roughly by a factor of n) but allows training on the whole training set (and also decreases the uncertainty, since it provides several values for the metric).
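
scikit-learn’s GridSearchCV is an assumed stand-in for this behaviour: every candidate value is scored as the average over the folds, and the best model is then refit on the whole training set.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, random_state=0)

search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # candidate hyperparameter values
    cv=5,                                      # 5 folds
    scoring="roc_auc",
    refit=True,                                # retrain the best model on the whole training set
)
search.fit(X, y)
print(search.best_params_, search.best_score_)  # best value and its mean score across folds
```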

Note

K-Fold cross validation is a way to optimize hyper parameters on a cross-validation set.

Not to be confused with K-Fold cross test, which is used to evaluate error margins on the final scores by using the test set.

Custom

Note

This only applies to the “Python in-memory” training engine

If you are using scikit-learn or XGBoost, you can provide a custom cross-validation object. This object must follow the protocol for cross-validation objects of scikit-learn.

See: http://scikit-learn.org/stable/modules/cross_validation.html
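
A minimal sketch of such an object, following the scikit-learn splitter protocol (a split() generator of (train_indices, test_indices) pairs plus get_n_splits()); the class name and splitting rule here are hypothetical:

```python
import numpy as np

class FirstLastSplit:
    """One fold: train on the first 80% of rows, validate on the last 20%."""

    def split(self, X, y=None, groups=None):
        n = len(X)
        cut = int(n * 0.8)
        indices = np.arange(n)
        yield indices[:cut], indices[cut:]  # (train_indices, test_indices)

    def get_n_splits(self, X=None, y=None, groups=None):
        return 1  # number of (train, validation) pairs that split() yields

# Usage with scikit-learn, e.g. cross_val_score(model, X, y, cv=FirstLastSplit())
```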

Visualization of grid search results

If you have selected several hyperparameter values for DSS to test, then during training DSS will show a graph of the evolution of the best cross-validation scores found so far. DSS only shows the best score found so far, so the graph will show “ever-improving” results, even though the latest evaluated model might not be good. If you hover over one of the points, you’ll see the evolution of hyperparameter values that yielded an improvement.

In the right part of the chart, you see final test scores for completed models (models for which the grid-search phase is done).

The timing shown on the X axis represents the time spent training this particular algorithm. DSS does not train all algorithms at once, so each algorithm’s X axis starts at 0.


Note

The scores that you are seeing in the left part of the chart are cross-validation scores on the cross-validation set. They cannot be directly compared to the test scores that you see in the right part.

  • They are not computed on the same data set
  • They are not computed with the same model (after grid-search, DSS retrains the model on the whole train set)
../_images/cross-val-chart.png

In this example:

  • Even though XGBoost was better than Random Forest on the cross-validation set, Random Forest ultimately won on the test set once retrained on the whole train set (this might indicate that the Random Forest didn’t have enough data once the cross-validation set was held out)
  • The ANN scored 0.83 on the cross-validation set, but its final score on the test set was slightly lower, at 0.812

In a model

Once a model is done training, you can also view the impact of each individual hyperparameter value on the final score and training time. This information is displayed both as a graph and a data table that you can export.

../_images/model-gridsearch-results.png

Settings: Metric

You can choose the metric that DSS will use to evaluate models.

This metric will be used to decide which model is the best when doing the hyperparameters optimization.

For display on the test set, this metric is the main one shown by default, but DSS always computes all metrics, so you can choose another metric to display on the final model. However, if you change the metric, the selected hyperparameters are not guaranteed to be the best ones for this new metric.

Threshold optimization

When doing binary classification, most models don’t output a single binary answer, but instead a continuous “probability of being positive”. You then need to select a threshold on this probability, above which DSS will consider the sample as positive.

Optimizing the threshold is always a compromise between the risk of false positives and the risk of false negatives.

DSS computes the true-positive, true-negative, false-positive and false-negative counts (also known as the confusion matrix) for many values of the threshold and automatically selects the threshold based on the selected metric.

You can also manually set the threshold at any time in the result screens.
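
To make the mechanics concrete, here is a sketch of threshold selection outside of DSS (assuming scikit-learn, and F1 as the selected metric):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Probability of being positive, predicted on held-out data
proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# Sweep candidate thresholds and keep the one that maximizes the metric
thresholds = np.linspace(0.05, 0.95, 19)
scores = [f1_score(y_te, (proba >= t).astype(int)) for t in thresholds]
best_threshold = thresholds[int(np.argmax(scores))]
print(best_threshold)
```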

Limitations

Multiclass classification

DSS cannot handle a large number of classes. We recommend that you do not try to use machine learning with more than about 50 classes.

You must ensure that all classes are detected while creating the machine learning task. Detection of possible classes is done on the analysis’s script sample. Make sure that this sample includes at least one row for each possible class. If some classes are not detected on this sample but found when fitting the algorithm, training will fail.

Furthermore, you need to ensure that all classes are present in both the train and the test set. You might need to adjust the split settings to make sure this holds.
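
One way to make this more likely (a sketch, assuming scikit-learn rather than DSS’s split settings) is a stratified split, which keeps every class represented in both sets in roughly the same proportions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_classes=3, n_informative=5, random_state=0)

# stratify=y preserves the class proportions in both the train and the test set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
```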

Note that these constraints are harder to satisfy with a large number of classes or with very rare classes.