Setting up Spark integration

For an introduction to the support of Spark in DSS, see DSS and Spark


Spark support in DSS is not restricted to Hadoop. You can install Spark and the Spark integration in DSS without a Hadoop cluster.

However, as of DSS 2.1, optimal performance will only be achieved by using HDFS datasets.

It is therefore highly recommended that you use Spark mainly for HDFS datasets and install the Hadoop integration.

Data Science Studio supports Spark 1.5 or 1.6 only.

Set up your Spark environment

If Spark 1.5 or 1.6 is included in your Hadoop distribution, you can skip this section entirely.

If that version is not included in your distribution, you can download pre-built Spark binaries for the relevant Hadoop version. You should not choose the “Pre-built with user-provided Hadoop” packages, as these do not include Hive support, which is needed for the advanced SparkSQL features used by DSS.
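As a sketch, a manual download might look like the following. The version and Hadoop build shown here are illustrative; pick the combination matching your cluster, and note that the actual fetch is left commented out:

```shell
# Illustrative choices - substitute the Spark version and Hadoop build
# appropriate for your cluster. Do NOT pick the "without-hadoop"
# (user-provided Hadoop) package, which lacks Hive support.
SPARK_VERSION=1.6.3
HADOOP_BUILD=hadoop2.6
TARBALL="spark-$SPARK_VERSION-bin-$HADOOP_BUILD.tgz"
URL="https://archive.apache.org/dist/spark/spark-$SPARK_VERSION/$TARBALL"
echo "Would fetch: $URL"
# wget "$URL" && tar xzf "$TARBALL" -C /opt    # uncomment to actually install
```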

You’ll then need to configure this Spark installation to point it to your existing Hadoop installation.

  • If you are using CDH or MapR, copy conf/spark-env.sh.template as a new executable file conf/spark-env.sh, and set HADOOP_CONF_DIR in it to the location of your Hadoop configuration directory (typically /etc/hadoop/conf).
  • For HDP, see their tutorial.
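A minimal sketch of the CDH/MapR configuration step. It uses a scratch directory so it can run anywhere; in practice, $SPARK_HOME is your real Spark installation (for example /opt/myspark), the template ships with the Spark tarball, and /etc/hadoop/conf is only the typical location of the Hadoop configuration:

```shell
# Demonstration in a scratch directory; substitute your real Spark install.
SPARK_HOME="$(mktemp -d)"
mkdir -p "$SPARK_HOME/conf"
: > "$SPARK_HOME/conf/spark-env.sh.template"   # stand-in for the shipped template

# Copy the template as a new executable file and point Spark at Hadoop's config.
cp "$SPARK_HOME/conf/spark-env.sh.template" "$SPARK_HOME/conf/spark-env.sh"
chmod +x "$SPARK_HOME/conf/spark-env.sh"
echo 'export HADOOP_CONF_DIR=/etc/hadoop/conf' >> "$SPARK_HOME/conf/spark-env.sh"

cat "$SPARK_HOME/conf/spark-env.sh"
```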

Test your Spark installation by going into the Spark directory and running:

./bin/spark-shell --master yarn-client

After a little while (and possibly many log messages), you should see a Scala prompt, preceded by the message “SQL context available as sqlContext”. Type in the following test code:

sc.parallelize(Seq(1, 2, 3)).sum()

You should then see some more log messages, followed by the expected result, 6. Type :quit to exit.

Set up Spark integration with DSS

Case 1: Spark integrated in your Hadoop distribution

This case applies if the “spark-submit” program for Spark 1.5 or 1.6 is in your PATH.

  • Go to the Data Science Studio data directory
  • Stop DSS
./bin/dss stop
  • Run the setup
./bin/dssadmin install-spark-integration
  • Start DSS
./bin/dss start

Case 2: Manual installation of Spark

Here, we assume that you installed and configured Spark in the /opt/myspark folder.

  • Go to the Data Science Studio data directory
  • Stop DSS
./bin/dss stop
  • Run the setup
./bin/dssadmin install-spark-integration -sparkHome /opt/myspark
  • Start DSS
./bin/dss start

Caveat for RedHat / CentOS 6.x clusters

Using PySpark from DSS requires that the cluster executor nodes have access to a Python 2.7 interpreter. On RedHat / CentOS 6.x systems this may not be the case as the system’s default Python is 2.6 (and cannot be upgraded).

You should make sure an additional Python 2.7 is available on all cluster members, and specify its location through an additional argument to the above command, as follows:

./bin/dssadmin install-spark-integration [-sparkHome SPARK_HOME] -pysparkPython PATH_TO_PYTHON2.7_ON_CLUSTER_EXECUTORS
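Before running the command, it is worth checking that the interpreter actually exists on the executor hosts. A minimal sketch of such a check, to be run on each node (the interpreter path below is an assumption; substitute the one you intend to pass as -pysparkPython):

```shell
# Check one node; repeat on every executor host (e.g. via ssh or your
# configuration management tool). The default path here is an assumption.
PY27="${PYSPARK_PYTHON:-/usr/bin/python2.7}"
if command -v "$PY27" >/dev/null 2>&1; then
    "$PY27" -V 2>&1      # Python 2 prints its version on stderr
else
    echo "No usable Python 2.7 at $PY27 - install one before enabling PySpark"
fi
```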

Verify the installation

Go to the Administration > Settings section of DSS. A Spark tab should now be present.

Configure Spark logging

Spark has DEBUG logging enabled by default. When reading non-HDFS datasets, this leads Spark to log entire datasets through the “org.apache.http.wire” logger.

We strongly recommend that you modify the Spark logging configuration to switch the org.apache.http.wire logger to INFO level. Please refer to the Spark documentation for how to do this.
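For instance, with the log4j 1.x configuration used by Spark 1.5/1.6 (conf/log4j.properties in the Spark installation, typically created by copying log4j.properties.template), the relevant line would be:

```
# Stop per-record wire logging when reading non-HDFS datasets
log4j.logger.org.apache.http.wire=INFO
```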