Error codes
Below are the error, warning, and informational codes that you may encounter while using DSS.
- ERR_ACTIVITY_DIRECTORY_SIZE_LIMIT_REACHED: Job activity directory size limit reached
- ERR_BUNDLE_ACTIVATE_BAD_CONNECTION_PERMISSIONS: Connection is not freely usable
- ERR_BUNDLE_ACTIVATE_BAD_CONNECTION_TYPE: Connection is the wrong type
- ERR_BUNDLE_ACTIVATE_CONNECTION_NOT_WRITABLE: Connection is not writable
- ERR_BUNDLE_ACTIVATE_MISSING_CONNECTION: Connection is missing
- ERR_CLUSTERS_INVALID_SELECTED: Selected cluster does not exist
- ERR_CODEENV_CONTAINER_IMAGE_FAILED: Could not build container image for this code environment
- ERR_CODEENV_CONTAINER_IMAGE_TAG_NOT_FOUND: Container image tag not found for this code environment
- ERR_CODEENV_CREATION_FAILED: Could not create this code environment
- ERR_CODEENV_DELETION_FAILED: Could not delete this code environment
- ERR_CODEENV_EXISTING_ENV: Code environment already exists
- ERR_CODEENV_INCORRECT_ENV_TYPE: Wrong type of code environment
- ERR_CODEENV_INVALID_CODE_ENV_ARCHIVE: Invalid code environment archive
- ERR_CODEENV_JUPYTER_SUPPORT_INSTALL_FAILED: Could not install Jupyter support in this code environment
- ERR_CODEENV_JUPYTER_SUPPORT_REMOVAL_FAILED: Could not remove Jupyter support from this code environment
- ERR_CODEENV_MISSING_DEEPHUB_ENV: Code environment for deep learning does not exist
- ERR_CODEENV_MISSING_ENV: Code environment does not exist
- ERR_CODEENV_MISSING_ENV_VERSION: Code environment version does not exist
- ERR_CODEENV_NO_CREATION_PERMISSION: User not allowed to create code environments
- ERR_CODEENV_NO_USAGE_PERMISSION: User not allowed to use this code environment
- ERR_CODEENV_NOT_USING_LATEST_DEEPHUB_ENV: Not using latest version of code environment for deep learning
- ERR_CODEENV_UNSUPPORTED_OPERATION_FOR_ENV_TYPE: Operation not supported for this type of code environment
- ERR_CODEENV_UPDATE_FAILED: Could not update this code environment
- ERR_CONNECTION_ALATION_REGISTRATION_FAILED: Failed to register Alation integration
- ERR_CONNECTION_API_BAD_CONFIG: Bad configuration for connection
- ERR_CONNECTION_AZURE_INVALID_CONFIG: Invalid Azure connection configuration
- ERR_CONNECTION_DUMP_FAILED: Failed to dump connection tables
- ERR_CONNECTION_INVALID_CONFIG: Invalid connection configuration
- ERR_CONNECTION_LIST_HIVE_FAILED: Failed to list indexable Hive connections
- ERR_CONNECTION_S3_INVALID_CONFIG: Invalid S3 connection configuration
- ERR_CONNECTION_SQL_INVALID_CONFIG: Invalid SQL connection configuration
- ERR_CONNECTION_SSH_INVALID_CONFIG: Invalid SSH connection configuration
- ERR_CONTAINER_CONF_NO_USAGE_PERMISSION: User not allowed to use this containerized execution configuration
- ERR_CONTAINER_CONF_NOT_FOUND: The selected container configuration was not found
- ERR_CONTAINER_IMAGE_PUSH_FAILED: Container image push failed
- ERR_DASHBOARD_EXPORT_SAND_BOXING_ERROR: Chrome cannot start in “sandbox” mode
- ERR_DATASET_ACTION_NOT_SUPPORTED: Action not supported for this kind of dataset
- ERR_DATASET_CSV_ROW_TOO_LARGE: Error in CSV file: Dataset row is too long to be processed
- ERR_DATASET_CSV_UNTERMINATED_QUOTE: Error in CSV file: Unterminated quote
- ERR_DATASET_HIVE_INCOMPATIBLE_SCHEMA: Dataset schema not compatible with Hive
- ERR_DATASET_INVALID_CONFIG: Invalid dataset configuration
- ERR_DATASET_INVALID_FORMAT_CONFIG: Invalid format configuration for this dataset
- ERR_DATASET_INVALID_METRIC_IDENTIFIER: Invalid metric identifier
- ERR_DATASET_INVALID_PARTITIONING_CONFIG: Invalid dataset partitioning configuration
- ERR_DATASET_PARTITION_EMPTY: Input partition is empty
- ERR_DATASET_TRUNCATED_COMPRESSED_DATA: Error in compressed file: Unexpected end of file
- ERR_ENDPOINT_INVALID_CONFIG: Invalid configuration for API Endpoint
- ERR_EXPORT_OUTPUT_TOO_LARGE: Export file size limit reached
- ERR_FOLDER_INVALID_CONFIG: Invalid managed folder configuration
- ERR_FOLDER_INVALID_PARTITIONING_CONFIG: Invalid folder partitioning configuration
- ERR_FORMAT_BOUNDING_BOXES: Invalid format of column representing bounding boxes
- ERR_FORMAT_LINE_TOO_LARGE: Line is too long to be processed
- ERR_FORMAT_TYPE_MISSING: Dataset is missing a format type
- ERR_FSPROVIDER_CANNOT_CREATE_FOLDER_ON_DIRECTORY_UNAWARE_FS: Cannot create a folder on this type of file system
- ERR_FSPROVIDER_DEST_PATH_ALREADY_EXISTS: Destination path already exists
- ERR_FSPROVIDER_FSLIKE_REACH_OUT_OF_ROOT: Illegal attempt to access data outside the connection root path
- ERR_FSPROVIDER_HTTP_CONNECTION_FAILED: HTTP connection failed
- ERR_FSPROVIDER_HTTP_INVALID_URI: Invalid HTTP URI
- ERR_FSPROVIDER_HTTP_REQUEST_FAILED: HTTP request failed
- ERR_FSPROVIDER_ILLEGAL_PATH: Illegal path for that file system
- ERR_FSPROVIDER_INVALID_CONFIG: Invalid configuration
- ERR_FSPROVIDER_INVALID_FILE_NAME: Invalid file name
- ERR_FSPROVIDER_LOCAL_LIST_FAILED: Could not list local directory
- ERR_FSPROVIDER_PATH_DOES_NOT_EXIST: Path in dataset or folder does not exist
- ERR_FSPROVIDER_ROOT_PATH_DOES_NOT_EXIST: Root path of the dataset or folder does not exist
- ERR_FSPROVIDER_SSH_CONNECTION_FAILED: Failed to establish SSH connection
- ERR_FSPROVIDER_TOO_MANY_FILES: Attempted to enumerate too many files
- ERR_HIVE_HS2_CONNECTION_FAILED: Failed to establish HiveServer2 connection
- ERR_HIVE_LEGACY_UNION_SUPPORT: Your current Hive version doesn’t support the UNION clause; it only supports UNION ALL, which does not remove duplicates
- ERR_JOB_INPUT_DATASET_NOT_READY_NO_FILES: Input dataset is not ready (no files found)
- ERR_LICENSING_TRIAL_INTERNAL_ERROR: Internal error trying to get a trial token
- ERR_LICENSING_TRIAL_STATUS_ERROR: Internal error trying to get a trial status
- ERR_METRIC_DATASET_COMPUTATION_FAILED: Metrics computation completely failed
- ERR_METRIC_ENGINE_RUN_FAILED: One of the metrics engines failed to run
- ERR_MISC_DISK_FULL: Disk is almost full
- ERR_MISC_EIDB: Missing, locked, unreachable or corrupted internal database
- ERR_MISC_ENOSPC: Out of disk space
- ERR_MISC_EOPENF: Too many open files
- ERR_ML_MODEL_DETAILS_OVERFLOW: Model details exceed size limit
- ERR_ML_VERTICA_NOT_SUPPORTED: Vertica ML backend is no longer supported
- ERR_NOT_USABLE_FOR_USER: You may not use this connection
- ERR_OBJECT_OPERATION_NOT_AVAILABLE_FOR_TYPE: Operation not supported for this kind of object
- ERR_PLUGIN_CANNOT_LOAD: Plugin cannot be loaded
- ERR_PLUGIN_COMPONENT_NOT_INSTALLED: Plugin component is not installed or has been removed
- ERR_PLUGIN_DEV_INVALID_COMPONENT_PARAMETER: Invalid parameter for plugin component creation
- ERR_PLUGIN_DEV_INVALID_DEFINITION: The descriptor of the plugin is invalid
- ERR_PLUGIN_MISSING_IN_CONTAINER_IMAGE: Plugin is missing in container image
- ERR_PLUGIN_INVALID_DEFINITION: The plugin’s definition is invalid
- ERR_PLUGIN_NOT_INSTALLED: Plugin is not installed or has been removed
- ERR_PLUGIN_WITHOUT_CODEENV: The plugin has no code env specification
- ERR_PLUGIN_WRONG_TYPE: Unexpected type of plugin
- ERR_PROJECT_INVALID_ARCHIVE: Invalid project archive
- ERR_PROJECT_INVALID_PROJECT_KEY: Invalid project key
- ERR_PROJECT_UNKNOWN_PROJECT_KEY: Unknown project key
- ERR_RECIPE_CANNOT_CHANGE_ENGINE: Cannot change engine (for example, the recipe is not compatible with Spark)
- ERR_RECIPE_CANNOT_CHECK_SCHEMA_CONSISTENCY: Cannot check schema consistency
- ERR_RECIPE_CANNOT_CHECK_SCHEMA_CONSISTENCY_EXPENSIVE: Cannot check schema consistency: expensive checks disabled
- ERR_RECIPE_CANNOT_CHECK_SCHEMA_CONSISTENCY_NEEDS_BUILD: Cannot compute output schema with an empty input dataset. Build the input dataset first.
- ERR_RECIPE_CANNOT_CHECK_SCHEMA_CONSISTENCY_ON_RECIPE_TYPE: Cannot check schema consistency on this kind of recipe
- ERR_RECIPE_CANNOT_CHECK_SCHEMA_CONSISTENCY_WITH_RECIPE_CONFIG: Cannot check schema consistency because of recipe configuration
- ERR_RECIPE_CANNOT_USE_ENGINE: Cannot use the selected engine for this recipe
- ERR_RECIPE_ENGINE_NOT_DWH: Error in recipe engine: SQL Server is not a Data Warehouse edition
- ERR_RECIPE_INCONSISTENT_I_O: Inconsistent recipe input or output
- ERR_RECIPE_SYNC_AWS_DIFFERENT_REGIONS: Error in recipe engine: Redshift and S3 are in different AWS regions
- ERR_RECIPE_PDEP_UPDATE_REQUIRED: Partition dependency update required
- ERR_RECIPE_SPLIT_INVALID_COMPUTED_COLUMNS: Invalid computed column
- ERR_SCENARIO_INVALID_STEP_CONFIG: Invalid scenario step configuration
- ERR_SECURITY_CRUD_INVALID_SETTINGS: The user attributes submitted for a change are invalid
- ERR_SECURITY_GROUP_EXISTS: The new requested group already exists
- ERR_SECURITY_INVALID_NEW_PASSWORD: The new password is invalid
- ERR_SECURITY_INVALID_PASSWORD: The password hash from the database is invalid
- ERR_SECURITY_DECRYPTION_FAILED: Decryption failed due to invalid HMAC
- ERR_SECURITY_MUS_USER_UNMATCHED: The DSS user is not configured to be matched onto a system user
- ERR_SECURITY_PATH_ESCAPE: The requested file is not within any allowed directory
- ERR_SECURITY_USER_EXISTS: The requested user for creation already exists
- ERR_SECURITY_WRONG_PASSWORD: The old password provided for password change is invalid
- ERR_SPARK_FAILED_DRIVER_OOM: Spark failure: out of memory in driver
- ERR_SPARK_FAILED_TASK_OOM: Spark failure: out of memory in task
- ERR_SPARK_FAILED_YARN_KILLED_MEMORY: Spark failure: killed by YARN (excessive memory usage)
- ERR_SPARK_PYSPARK_CODE_FAILED_UNSPECIFIED: PySpark code failed
- ERR_SPARK_SQL_LEGACY_UNION_SUPPORT: Your current Spark version doesn’t support the UNION clause; it only supports UNION ALL, which does not remove duplicates
- ERR_SQL_CANNOT_LOAD_DRIVER: Failed to load database driver
- ERR_SQL_DB_UNREACHABLE: Failed to reach database
- ERR_SQL_IMPALA_MEMORYLIMIT: Impala memory limit exceeded
- ERR_SQL_POSTGRESQL_TOOMANYSESSIONS: Error in PostgreSQL: too many sessions open concurrently
- ERR_SQL_TABLE_NOT_FOUND: SQL table not found
- ERR_SQL_VERTICA_TOOMANYROS: Error in Vertica: too many ROS
- ERR_SQL_VERTICA_TOOMANYSESSIONS: Error in Vertica: too many sessions open concurrently
- ERR_SYNAPSE_CSV_DELIMITER: Invalid CSV delimiter configuration
- ERR_TRANSACTION_FAILED_ENOSPC: Out of disk space
- ERR_TRANSACTION_GIT_COMMMIT_FAILED: Failed committing changes
- ERR_USER_ACTION_FORBIDDEN_BY_PROFILE: Your user profile does not allow you to perform this action
- INFO_RECIPE_POTENTIAL_FAST_PATH: Potential fast path configuration
- INFO_RECIPE_IMPALA_POTENTIAL_FAST_PATH: Potential Impala fast path configuration
- WARN_ACTIVITY_WAITING_K8S_CONTAINERSTARTING_CLOUD: Execution container is initializing
- WARN_ACTIVITY_WAITING_K8S_POD_PENDING_CLOUD: Container will start soon
- WARN_ACTIVITY_WAITING_K8S_QUOTA_EXCEEDED_CLOUD: You have exceeded your RAM and CPU quotas
- WARN_ACTIVITY_WAITING_QUEUED_CLOUD: Your activity is queued
- WARN_CLUSTERS_NONE_SELECTED_GLOBAL: No default cluster selected
- WARN_CLUSTERS_NONE_SELECTED_PROJECT: No cluster selected in project
- WARN_CONNECTION_HDFS_ACL_SUBDIRECTORY: Subdirectory ACL synchronization mode
- WARN_CONNECTION_NO_HADOOP_INTERFACE: No Hadoop interface set
- WARN_CONNECTION_DATABRICKS_NO_AUTOFASTWRITE: Automatic fast-write disabled
- WARN_CONNECTION_SNOWFLAKE_NO_AUTOFASTWRITE: Automatic fast-write disabled
- WARN_CONNECTION_SPARK_NO_GROUP_WITH_DETAILS_READ_ACCESS: No groups allowed to read connection details
- WARN_FOLDER_CONNECTION_TYPE_ERROR: Invalid connection linked to a managed folder
- WARN_JOBS_MAX_OVER_MAX_ACTIVITIES: Jobs - Max jobs is over max activities
- WARN_JOBS_MAX_TOO_HIGH: Jobs - Max value too high
- WARN_JOBS_NO_LIMIT: Jobs - No limits set
- WARN_JVM_CONFIG_XMX_IN_RED_ZONE: Suboptimal Xmx value
- WARN_JVM_CONFIG_KERNEL_XMX_OVER_THRESHOLD: Xmx value for kernel over threshold
- WARN_MISC_AUDIT_NO_LOG4J_LOCAL_TARGET: No Log4j local target
- WARN_MISC_CODE_ENV_BUILTIN_MODIFIED: Built-in code env modified
- WARN_MISC_CODE_ENV_DEPRECATED_INTERPRETER: Deprecated Python interpreter
- WARN_MISC_CODE_ENV_USES_PYSPARK: pyspark installed in a code environment
- WARN_MISC_DISK_MOUNT_TYPE: Non-recommended filesystem type
- WARN_MISC_DISK_NOEXEC_FLAG: Filesystem mounted with the noexec flag
- WARN_MISC_DISK_ROTATIONAL: Rotational hard drives
- WARN_MISC_ENVVAR_SPECIAL_CHAR: Environment variables with special characters
- WARN_MISC_EVENT_SERVER_NO_TARGET: No target configured for the event server
- WARN_MISC_JDBC_JARS_CONFLICT: JDBC drivers - some JARs are prone to version conflicts
- WARN_MISC_LARGE_INTERNAL_DB: Internal runtime database is too large
- WARN_PROJECT_LARGE_JOB_HISTORY: Projects - Too many or too old job logs
- WARN_PROJECT_LARGE_SCENARIO_HISTORY: Projects - Too many or too old scenario run logs
- WARN_PROJECT_LARGE_STREAMING_HISTORY: Projects - Too many or too old continuous activity logs
- WARN_RECIPE_SPARK_INDIRECT_HDFS: No direct access to read/write HDFS dataset
- WARN_RECIPE_SPARK_INDIRECT_S3: No direct access to read/write S3 dataset
- WARN_SPARK_NON_DISTRIBUTED_READ: Input dataset is read in a non-distributed way
- WARN_SPARK_NON_DISTRIBUTED_WRITE: Output dataset is written in a non-distributed way
- WARN_SPARK_UDFS_MAY_BE_BROKEN: Python UDFs may fail
- WARN_SPARK_TASK_OOM: Some Spark tasks ran out of memory
- WARN_SPARK_TASK_DISKFULL: Some Spark tasks encountered disk space issues
- WARN_SPARK_K8S_KILLED_EXECUTORS: Some Kubernetes executors were killed
- WARN_SPARK_MISSING_DRIVER_TO_EXECUTOR_CONNECTIVITY: The Spark driver cannot call into the executors
- WARN_SPARK_WITH_DATABRICKS_DATASET: Not leveraging Databricks compute
- WARN_SECURITY_NO_CGROUPS: cgroups for resource control are not enabled
- WARN_SECURITY_UIF_NOT_ENABLED: User Isolation Framework is not enabled
Errors that do not carry one of the codes above are undocumented errors.