
Data Access methods


There are several ways for users to access NXCALS data.

In this guide we explain, with examples, how to access data using the different methods:


Authentication is required for any of those methods in the form of a Kerberos token. A Kerberos keytab file must be generated, and optionally the client machine must provide a valid CERN grid CA certificate. Moreover, the user must be registered with the NXCALS service in order to access the data.

Supported Java and Python versions

Please note that NXCALS requires specific versions of the Java and Python runtimes.

Deprecated Access Methods

  • python builders were moved from to
  • java builders were moved from to
  • DevicePropertyQuery has been renamed to DevicePropertyDataQuery
  • KeyValuesQuery and VariableQuery were unified into DataQuery, accessible via byEntities() and byVariables() respectively

Bundled software

The NXCALS ecosystem contains many different environments and systems. Configuring a vanilla Spark instance for NXCALS data access requires some basic knowledge of the infrastructure that it depends on and interacts with. Here are some of the elements that have to be taken into account, besides registration and authentication:

  • Hadoop cluster that NXCALS runs on (configuration for connecting)
  • Location of the API services and brokers
  • NXCALS data access client module and its dependencies schema
  • Python execution environment configuration (for PySpark)
  • Effective Spark properties configuration
  • Basic YARN configuration (optionally for yarn mode execution)

The first set of actions (outside of the Spark context) can be considered prerequisites for the Spark configuration. Those actions are common across different use cases, since they must be applied for any operation that requires data access via a Spark context. This is where the NXCALS bundles come in very handy. Those bundles contain a pre-configured Spark instance with a single goal in mind: providing a bootstrapped instance of Spark with all the configuration needed to connect on-the-fly to a target NXCALS environment.

PySpark and Scala Spark execution contexts are supported by those bundles.

We recommend reading the Spark documentation and going through the book "Spark: The Definitive Guide" by Bill Chambers and Matei Zaharia.

NXCALS package


To install the latest version of the NXCALS package, you will need to set up a Python venv with access to the ACC-PY package repository.

If you have a computer from the CERN accelerator sector (ACC):

source /acc/local/share/python/acc-py/base/pro/
acc-py venv ./venv
source ./venv/bin/activate
python -m pip install nxcals

If you don't have a computer from the CERN accelerator sector (ACC):

python3 -m venv ./venv
source ./venv/bin/activate
python -m pip install git+
python -m pip install nxcals


One of the tools included in the NXCALS bundle is PySpark. To use it, first activate your Python venv:

source ./venv/bin/activate


Both the NXCALS API services and the dedicated Hadoop cluster (operated by the IT-DB-SAS group) rely on Kerberos as their authentication mechanism. The session must be provided with the Kerberos principal and the keytab location.

Using that information, a Kerberos token has to be created in the environment:

# Obtain Kerberos token in the environment
# You can also skip the kinit step and use --principal <user>@CERN.CH --keytab <keytab-file>
# as options, directly to the spark-shell

$ kinit -kt /path/to/keytab/my.keytab <user>@CERN.CH

Alternatively, the principal and keytab properties can be passed to the Spark context. This can be done by modifying the file nxcals-bundle/conf/spark-defaults.conf in your Python venv:

  • for the NXCALS API services we need to add to 'spark.driver.extraJavaOptions' two additional options:
    -Dkerberos.principal=<user>@CERN.CH -Dkerberos.keytab=/path/to/keytab/my.keytab
  • for YARN-specific Kerberos configuration one can use the properties below:
    spark.yarn.keytab /path/to/keytab/my.keytab
    spark.yarn.principal <user>@CERN.CH

The properties can also be passed as command-line options to the Spark session:

$ pyspark --principal <user>@CERN.CH --keytab /path/to/keytab/my.keytab

Example of PySpark session when using NXCALS package

# Activate your Python venv
$ source ./venv/bin/activate

# Now just run the pyspark executable and you are ready to go with NXCALS
(venv) $ pyspark

# Or executors=5 cores-per-executor=2 memory=3GB driver-memory=10G on YARN mode
(venv) $ pyspark --master yarn --num-executors 5 --executor-cores 2 --executor-memory 3G --conf spark.driver.memory=10G

# Or just get some help on available parameters
(venv) $ pyspark --help

# Once pyspark has started

# Import NXCALS Builders
>>> from import *

# Build a CMW specific Device/Property query
>>> query = DevicePropertyDataQuery.builder(spark).system("CMW").startTime("2018-04-02 00:00:00.000").endTime("2018-05-02 00:00:00.000").entity().parameter("CPS.TGM/FULL-TELEGRAM.STRC").build().select("USER","DEST")

# Or you can query by key-values, which is a more generic approach
>>> query =  DataQuery.builder(spark).byEntities().system("CMW").startTime("2018-04-02 00:00:00.000").endTime("2018-05-02 00:00:00.000").entity().keyValue("device","CPS.TGM").keyValue("property", "FULL-TELEGRAM.STRC").build()

# Count the records in the dataframe
>>> query.count()
Count: 644142

# Select specific fields with filter predicates
>>>"*").where("DEST like '%EAST%'").show()

# To exit pyspark type
>>> quit()

# Deactivate your Python venv
(venv) $ deactivate
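The queries above return regular Spark DataFrames, so a filtered selection can also be brought into pandas for local analysis with the standard toPandas() method (only advisable for result sets that fit in driver memory). A minimal sketch; the small fabricated frame below merely stands in for a real query result:

```python
import pandas as pd

# In a real session: df = query.toPandas()  # standard PySpark DataFrame method
# Here a small fabricated frame stands in for the query result.
df = pd.DataFrame({
    "USER": ["SFTPRO1", "EAST1", "EAST2"],
    "DEST": ["PS_DUMP", "EAST_T8", "EAST_N"],
})

# The same filter as the Spark where("DEST like '%EAST%'"), expressed in pandas
east = df[df["DEST"].str.contains("EAST")]
print(east["USER"].tolist())
```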

Running a PySpark session customized by an external startup file:

There are use cases where a PySpark session needs to be run together with some interactive start-up commands.

Such interactive commands could be part of an analysis that should run during start-up of the session. For this purpose, the path to that Python file should be put in the PYTHONSTARTUP environment variable passed to pyspark:

(venv) $ PYTHONSTARTUP="/path/to/my/" pyspark

Standalone PySpark script

Instead of running PySpark applications in interactive mode, you can submit them through the spark-submit script to be run locally or in a cluster, as described here.
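For example, a standalone script (here the hypothetical placeholder stands in for your own file) could be submitted like this:

```shell
source ./venv/bin/activate

# Run the script locally, using all available cores
# ( is a hypothetical placeholder for your own script)
spark-submit --master "local[*]"

# Or submit it to YARN with explicit resource sizing
spark-submit --master yarn --num-executors 5 --executor-cores 2 --executor-memory 3G
```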

Standalone Python application

You can also run standalone Python applications in your Python venv.

Here is a sample Python application:

from nxcals import spark_session_builder
from import DevicePropertyDataQuery

# Possible master values (the location to run the application):
# local: Run Spark locally with one worker thread (that is, no parallelism).
# local[K]: Run Spark locally with K worker threads. (Ideally, set this to the number of cores on your host.)
# local[*]: Run Spark locally with as many worker threads as logical cores on your host.
# yarn: Run using a YARN cluster manager.

spark = spark_session_builder.get_or_create(app_name='spark-basic', master='yarn')

intensity = DevicePropertyDataQuery.builder(spark).system("CMW").startTime("2018-06-17 00:00:00.000").endTime("2018-06-20 00:00:00.000").entity().parameter("PR.BCT/HotspotIntensity").build()

# count the data points
print(intensity.count())

You can simply run it in your Python venv:

$ source ./venv/bin/activate
(venv) $ python


Alternatively, one can build a Spark session on YARN using a so-called flavor, corresponding to a predefined resource sizing with a choice of YARN_SMALL, YARN_MEDIUM and YARN_LARGE, for example:

from nxcals.spark_session_builder import get_or_create, Flavor
spark = get_or_create(flavor=Flavor.YARN_SMALL)

These can be overridden by individual Spark properties:

spark = get_or_create(flavor=Flavor.YARN_MEDIUM, conf={'spark.executor.memory': '5g'})

One can also specify the environment, pro (default) or testbed:

spark = get_or_create(flavor=Flavor.YARN_LARGE, hadoop_env="pro")

Scala Spark shell

The NXCALS Spark bundle also allows running the Scala Spark shell. Similarly to PySpark, it relies on Kerberos authentication, which can be provided in the same way.

A sample spark-shell session running Scala code is provided below:

$ ./bin/spark-shell

# Or executors=5 cores-per-executor=2 memory=3GB driver-memory=10G with YARN mode

$ ./bin/spark-shell --master yarn --num-executors 5 --executor-cores 2 --executor-memory 3G --conf spark.driver.memory=10G

# Or just get some help on available parameters
$ ./bin/spark-shell --help

# OPTIONALLY: Load some examples (from example.scala file, ls the dir to see)
# Once the scala shell is started please load the examples (it will run some queries)
# Execution time depends on the number of executors, memory, cores, etc.
# There will be 3 datasets loaded, data, tgmData & tgmFiltered ready to play with.

scala> :load example.scala

# Wait for the script to load, watch the data being retrieved and some operations done on it.
# Once you get the prompt scala> back you can write your commands.

# Some operations on the data (see example.scala file for more or read the Spark manual)
scala> data.printSchema
scala> tgmData.printSchema


scala> tgmFiltered.count
scala> tgmFiltered.join(data, "cyclestamp").agg(sum('current')).show

# To exit spark-shell type
scala> :quit

You can look at some Scala code examples for CMW.

NXCALS Spark bundle

Bundle download

The latest version of the bundle can be downloaded and unzipped using the following bash command:

# Download the latest version of bundle for the NXCALS PRO environment

$ curl -s -k -O &&
unzip && rm && cd spark-*-bin-hadoop3.2.1

If required it can be obtained interactively from the following links:


The NXCALS Spark bundle uses relative path configuration in order to load its resources. To allow Spark to correctly pick up all the required resources, you need to execute its binaries from within the extracted Spark directory.

Setting up Python3 virtual environment for running Pyspark:

Within the same Spark bundle for NXCALS you will find a bash script named

This script is responsible for setting up and activating a Python 3 virtual environment for NXCALS needs. It then installs the required NXCALS data access Python package (bundled with the sources) into that virtual environment, making the API directly available in the PySpark context.

Sourcing the environment preparation script

Sourcing the Python virtual environment preparation script is highly important and should not be omitted when the intention is to run a PySpark session!

That is because the NXCALS sources are installed via the managed pip instance. Doing otherwise will result in a PySpark initialization in a non-working state.

Running PySpark with required Kerberos authentication:


The authentication mechanism relying on Kerberos is identical to the one presented for the NXCALS package, as described previously here.

Example of PySpark session when using NXCALS Spark bundle:

# source the script to setup and activate python3 venv
source ./

# the execution of the above preparation script will generate a new directory in the spark-bundle root
# named 'nxcals-python3-env', which is used as the target for the virtual env

# check if python path is directly available/served from within the virtual env directory
(nxcals-python3-env)$ which python

# /path/to/spark/nxcals/bundle/spark-3.1.1-bin-hadoop3.2.1/nxcals-python3-env/bin/python

# check list of modules installed in virtual env:
(nxcals-python3-env)$ pip list

# numpy (1.14.2)
# nxcals-data-access-python3 (1.0.24)
# pandas (0.24.2)
# pip (9.0.1)
# pyarrow (0.13.0)
# setuptools (28.8.0)

# If the output from both commands is similar to the one presented, we are good to go with pyspark
# Now just run the pyspark executable and you are ready to go with NXCALS config

(nxcals-python3-env)$ ./bin/pyspark

# Or executors=5 cores-per-executor=2 memory=3GB driver-memory=10G on YARN mode

(nxcals-python3-env)$ ./bin/pyspark --master yarn --num-executors 5 --executor-cores 2 --executor-memory 3G --conf spark.driver.memory=10G

# Once the context is loaded

# Import NXCALS Builders
>>> from import *

# Build a CMW specific Device/Property query
>>> query = DevicePropertyDataQuery.builder(spark).system("CMW").startTime("2018-04-02 00:00:00.000").endTime("2018-05-02 00:00:00.000")

# Or you can query by key-values, which is a more generic approach 
>>> query = DataQuery.builder(spark).byEntities().system("CMW").startTime("2018-04-02 00:00:00.000").endTime("2018-05-02 00:00:00.000").entity().keyValue("device","CPS.TGM").keyValue("property", "FULL-TELEGRAM.STRC").build()

# Count the records in the dataframe
>>> query.count()
Count: 644142

# To exit pyspark type
>>> quit()

# To deactivate virtual python3 environment type
(nxcals-python3-env)$ deactivate

SWAN Notebook

Working in close collaboration with our colleagues from EP-SFT (Enric Saavedra, Danilo Piparo) and IT-DB-SAS has led to the successful integration of SWAN's Jupyter notebooks with the NXCALS Hadoop cluster, allowing data stored on the NXCALS service to be accessed and analyzed using Spark.

  • The SWAN platform is accessible at the following link:
  • More information about SWAN can be found on the dedicated service homepage:
  • A SWAN session can be created by following the instructions available here
  • Example notebooks can be downloaded from CERNBox as described here


If you are planning to use a service account instead of your personal computing account, you have to subscribe to the CERNBOX service first ( and log in to before using SWAN.

Java API

A Java examples project can be cloned from GitLab by following steps described here.


First install Jupyter or just Anaconda Python package from

Once done, export the Python driver to be Jupyter and run ./bin/pyspark as normal:


./bin/pyspark --master yarn --num-executors 10
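The driver export mentioned above is typically done through Spark's standard driver environment variables; a sketch (the exact notebook option may differ between Jupyter versions):

```shell
# Standard Spark environment variables that select Jupyter as the PySpark driver
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
```

With these set, running ./bin/pyspark as shown above starts a notebook server instead of the plain shell.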

This will open a browser with Jupyter notebook. Please wait a bit until Spark is initialized on the cluster.


Make sure to have jupyter in your PATH. Verify using "which jupyter".

At this stage you should be able to open a Jupyter notebook in the browser; however, before executing any code you still have to add the path to the NXCALS packages from the NXCALS bundle.

Execute the following in the first cell of your notebook:

import os
import sys

sys.path.append(os.getcwd() + "/nxcals-python3-env/lib/python3.7/site-packages/")

Now your notebook is ready for interaction with the NXCALS API.