Usage of Spark in DSS

When Spark support is enabled in DSS, a large number of components feature additional options to run jobs on Spark.

SparkSQL recipes

SparkSQL recipes generally work like SQL recipes, but are not limited to SQL datasets: for other dataset types, DSS fetches the data and passes it on to Spark.

You can set the Spark configuration in the Advanced tab.

See SparkSQL recipes
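
For illustration, the body of a SparkSQL recipe is a plain SQL query over the recipe's input datasets. A minimal sketch, assuming a hypothetical input dataset named transactions with customer_id and amount columns:

```sql
-- Hypothetical example: aggregate orders per customer
-- from an input dataset named "transactions"
SELECT customer_id,
       COUNT(*)    AS nb_orders,
       SUM(amount) AS total_amount
FROM transactions
GROUP BY customer_id
```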

Visual recipes

You can run Preparation and some visual recipes on Spark. To do so, select Spark as the execution engine and choose the appropriate Spark configuration.

For each visual recipe that supports a Spark engine, you can select the engine under the “Run” button in the recipe’s main tab, and set the Spark configuration in the “Advanced” tab.

All visual data-transformation recipes support running on Spark, including:

  • Prepare

  • Sync

  • Sample / Filter

  • Group

  • Distinct

  • Join

  • Pivot

  • Sort

  • Split

  • Top N

  • Window

  • Stack

Python code

You can write Spark code using Python, in both recipes and notebooks.
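
As a sketch of what a PySpark recipe typically looks like, assuming hypothetical dataset names ("input_dataset", "output_dataset") and the dataiku.spark helper module available in the DSS runtime (this only runs inside DSS, not standalone):

```python
# Minimal PySpark recipe sketch. Dataset names are hypothetical;
# adapt them to the inputs/outputs of your recipe in the Flow.
import dataiku
import dataiku.spark as dkuspark
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)

# Read a DSS dataset as a Spark DataFrame
input_df = dkuspark.get_dataframe(sqlContext, dataiku.Dataset("input_dataset"))

# Apply any Spark transformation
result_df = input_df.groupBy("category").count()

# Write the result back to a DSS dataset
dkuspark.write_with_schema(dataiku.Dataset("output_dataset"), result_df)
```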

Note about Spark code in Python notebooks

All Python notebooks use the same named Spark configuration. See Spark configurations for more information about named Spark configurations.

When you change the named Spark configuration used by notebooks, you need to restart DSS afterwards.

R code

Warning

Tier 2 support: SparkR and sparklyr are covered by Tier 2 support

You can write Spark code using R, in both recipes and notebooks.

Both the recipe and the notebook support two different APIs for accessing Spark:

  • The “SparkR” API, i.e. the native API bundled with Spark

  • The “sparklyr” API
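
As a generic illustration of the sparklyr flavor (a standalone sketch, not DSS-specific: it uses a local Spark master and R's built-in iris data for brevity; inside DSS, the Spark connection is provided by the recipe or notebook environment):

```r
library(sparklyr)
library(dplyr)

# Hypothetical standalone example
sc <- spark_connect(master = "local")

iris_tbl <- copy_to(sc, iris, "iris_tbl", overwrite = TRUE)

# dplyr verbs are translated to Spark operations
result <- iris_tbl %>%
  group_by(Species) %>%
  summarise(n = n()) %>%
  collect()

spark_disconnect(sc)
```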

Note about Spark code in R notebooks

All R notebooks use the same named Spark configuration. See Spark configurations for more information about named Spark configurations.

When you change the named Spark configuration used by notebooks, you need to restart DSS afterwards.

Scala code

You can use Scala, Spark’s native language, to implement your custom logic. The Spark configuration is set in the recipe’s Advanced tab.

Interaction with DSS datasets is provided through a dedicated DSS Spark API that makes it easy to read and write SparkSQL DataFrames from datasets.

Warning

The Spark-Scala notebook is deprecated and will soon be removed

Machine Learning with MLlib

See the dedicated MLlib page.