Exposing an R prediction model

In addition to standard models trained using the DSS visual Machine Learning component, the API node can also expose custom models written in R by the user.

To write a “custom R prediction” endpoint in an API node service, you must write an R function that takes the features of the record to predict as input and outputs the prediction.

The custom model can optionally use a DSS managed folder, which is typically used to store the serialized version of the model. The code for the custom model is written in the “API service” part of DSS.

Creating the R prediction endpoint

To create a custom prediction endpoint, start by creating a service. (See Your first API service for more information). Then, create an endpoint of type “Custom prediction (R)”.

You will need to indicate whether you want to create a Regression (predicting a continuous value) or a Classification (predicting a discrete value) model.

DSS prefills the Code part with a sample depending on the selected model type.

Using a managed folder

A custom model can optionally (and in most cases will) use a DSS managed folder. When you package your service, the contents of the folder are bundled with the package, and your custom code receives the path to the managed folder content.

A typical use case is a custom training recipe that dumps the serialized model into a folder. Your custom prediction code then uses this managed folder.

[Image: custom model using a managed folder]
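For illustration, here is a minimal sketch of the training side, assuming an R code recipe whose output is a managed folder and that the dataiku R package functions dkuReadDataset() and dkuManagedFolderPath() are available; the dataset name, folder name, and model are purely hypothetical:

library(dataiku)

# Hypothetical training dataset and model
df <- dkuReadDataset("training_data")
model <- lm(price ~ square_meters + rooms, data = df)

# Serialize the model into the managed folder (folder and file names are illustrative)
folder_path <- dkuManagedFolderPath("model_folder")
saveRDS(model, file.path(folder_path, "model.rds"))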

Structure of the code

To create a custom model, you need to write a single R function. When you create your endpoint, DSS prefills the code with several sample functions that can all work as an R prediction model.

The function takes named arguments that are the features of the record to predict. You may use default values if you expect some features not to be present.

In the “Settings” tab of your endpoint, you must select the name of the function to use as your main predictor.

In the R code, you can retrieve the absolute paths to the resource folders using dkuAPINodeGetResourceFolders(), which returns a character vector.
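For example, a minimal sketch of the initialization code, assuming the model was serialized with saveRDS() under the hypothetical file name “model.rds”:

# Load the serialized model once, when the endpoint starts
resource_folders <- dkuAPINodeGetResourceFolders()
model <- readRDS(file.path(resource_folders[1], "model.rds"))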

Regression

A regression prediction function can return:

  • Either a single numeric value representing the prediction

  • Or a list containing:

    • (mandatory) prediction: a single numeric value representing the prediction
    • (optional) customKeys: a list containing additional response keys that will be sent to the user
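For illustration, here is a minimal sketch of a regression prediction function; the feature names (square_meters, rooms) and the scoring formula are hypothetical, and the samples prefilled by DSS remain the recommended starting point:

predict_price <- function(square_meters, rooms = 2) {
    # Features arrive as named arguments; "rooms" has a default value
    # in case that feature is absent from the query
    predicted <- 1000 * as.numeric(square_meters) + 5000 * as.numeric(rooms)

    # Returning a list allows sending additional keys back to the caller
    list(
        prediction = predicted,
        customKeys = list(currency = "EUR")
    )
}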

Classification

A classification prediction function can return:

  • Either a single character string representing the predicted class

  • Or a list containing:

    • (mandatory) prediction: a single character string representing the predicted class
    • (optional) probas: a named list mapping each class to its probability
    • (optional) customKeys: a list containing additional response keys that will be sent to the user
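For illustration, a minimal sketch of a binary classification prediction function; the feature names, class labels, and scoring logic are hypothetical (in practice you would typically call predict() on a model loaded from the managed folder or resource folders):

predict_churn <- function(age, income = 0) {
    # Hypothetical scoring logic standing in for a real model
    score <- 1 / (1 + exp(-(0.05 * as.numeric(age) - 1e-05 * as.numeric(income))))
    predicted_class <- if (score > 0.5) "churn" else "no_churn"

    list(
        prediction = predicted_class,
        probas = list(churn = score, no_churn = 1 - score),
        customKeys = list(model_version = "example")
    )
}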

Testing your code

Developing a custom model involves frequent testing. To ease this process, a “Development server” is integrated into the DSS UI.

To test your code, click the “Deploy to Dev Server” button. The dev server starts and loads your model. You are redirected to the Test tab, where you can check whether your model loads properly.

You can then define Test queries, i.e. JSON objects akin to the ones that you would pass to the API node user API. When you click on the “Play test queries” button, the test queries are sent to the dev server, and the result is printed.
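For example, a test query for the hypothetical regression function above could look like the following (the feature names are only illustrative; see the API node user API documentation for the exact request format):

{
    "features": {
        "square_meters": 85,
        "rooms": 3
    }
}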

R packages

We strongly recommend that you use code environments for deploying custom model packages if these packages use any external library (i.e., one not bundled with DSS).

Server-side tuning

It is possible to tune the behavior of R prediction endpoints on the API node side. You can tune how many concurrent requests your API node can handle. This depends mainly on your model (its speed and in-memory size) and the available resources on the server running the API node.

You can configure the parallelism parameters for an endpoint by creating a JSON file in the config/services folder in the API node’s data directory.

mkdir -p config/services/<SERVICE_ID>

Then create or edit the config/services/<SERVICE_ID>/<ENDPOINT_ID>.json file.

This file must have the following structure and be valid JSON:

{
    "pool" : {
        "floor" : 1,
        "ceil" : 8,
        "cruise": 2,
        "queue" : 16,
        "timeout" : 10000
    }
}

This configuration allows you to control the number of allocated pipelines.

One allocated pipeline means one R process running your code, preloaded with your initialization code, and ready to serve a prediction request. If you have 2 allocated pipelines (meaning 2 R processes), 2 requests can be handled simultaneously; other requests are queued until one of the pipelines is freed (or the request times out). When the queue is full, additional requests are rejected.

Those parameters are all positive integers:

  • floor (default: 1): Minimum number of pipelines. Those are allocated as soon as the endpoint is loaded.
  • ceil (default: 8): Maximum number of allocated pipelines at any given time. Additional requests will be queued. ceil must be greater than or equal to floor.
  • cruise (default: 2): The “nominal” number of allocated pipelines. When more requests come in, more pipelines may be allocated, up to ceil. When all pending requests have been completed, the number of pipelines may go back down to cruise. cruise must be between floor and ceil.
  • queue (default: 16): The number of requests that will be queued when ceil pipelines are already allocated and busy. The queue is fair: first received request will be handled first.
  • timeout (default: 10000): Time, in milliseconds, that a request may spend in the queue waiting for a free pipeline before being rejected.

Each R process will only serve a single request at a time.

It is important to set “cruise”:

  • At a high-enough value to serve your expected peak traffic. If you set cruise too low, DSS will kill excess R processes, only to recreate new ones shortly afterwards.
  • But also at a not-too-high value, because each pipeline implies a running R process consuming the memory required by the model.

You can also deploy your service on multiple servers; see High availability and scalability.