Data Access methods
Introduction
There are several ways for users to access NXCALS data.
In this guide we explain, with examples, how to access data using the following methods:
- NXCALS Spark bundle including PySpark tool
- Running PySpark as standalone application
- SWAN Notebook
- Java API
- Jupyter
- Python application outside of PySpark shell
- Scala Spark shell (part of NXCALS Spark bundle)
Note
Authorization for any of these methods requires a Kerberos token. A Kerberos keytab file must be generated and, optionally, the client machine must provide a valid CERN grid CA certificate. Moreover, the user must be registered with the NXCALS service to access the data.
Supported Java and Python versions
Please note that NXCALS requires specific versions of the Java and Python runtimes.
Deprecated Access Methods
- Python builders were moved from `cern.nxcals.pyquery.builders` to `cern.nxcals.api.extraction.data.builders`
- Java builders were moved from `cern.nxcals.data.access.builders` to `cern.nxcals.api.extraction.data.builders`
- `DevicePropertyQuery` has been renamed to `DevicePropertyDataQuery`
- `KeyValuesQuery` and `VariableQuery` were unified into `DataQuery`, accessible via `byEntities()` and `byVariables()` respectively
NXCALS Spark bundle
The NXCALS ecosystem contains many different environments and systems. Configuring a vanilla Spark instance for NXCALS data access requires some knowledge of the infrastructure it depends on and interacts with. Here are some of the elements that have to be taken into account besides registration and authentication:
- The Hadoop cluster that NXCALS runs on (connection configuration)
- Location of the API services and brokers
- NXCALS data access client module and its dependencies schema
- Python execution environment configuration (for PySpark)
- Effective Spark properties configuration
- Basic YARN configuration (optionally for yarn mode execution)
The first set of actions (outside of the Spark context) are prerequisites for Spark configuration. These actions are common across different use cases, since they have to be applied for any operation that requires data access via a Spark context. This is where the NXCALS Spark bundle comes in very handy. The bundle contains a pre-configured Spark instance with a single goal in mind: providing a downloadable, bootstrapped instance of Spark with all the configuration needed to connect on-the-fly to a target NXCALS environment.
PySpark and Scala Spark execution contexts are supported by this bundle.
We recommend reading the Spark documentation and going through the book "Spark: The Definitive Guide" by Bill Chambers and Matei Zaharia.
Bundle install
To install the latest version of the NXCALS bundle, you will need to set up a Python venv
with access to the ACC-PY package repository.
If you have a computer from the CERN accelerator sector (ACC):
source /acc/local/share/python/acc-py/base/pro/setup.sh
acc-py venv ./venv
source ./venv/bin/activate
python -m pip install nxcals
If you don't have a computer from the CERN accelerator sector (ACC):
python3 -m venv ./venv
source ./venv/bin/activate
python -m pip install git+https://gitlab.cern.ch/acc-co/devops/python/acc-py-pip-config.git
python -m pip install nxcals
PySpark
One of the tools included in the NXCALS bundle is PySpark. To use it, first activate your Python venv:
source ./venv/bin/activate
pyspark
Authentication
Both the NXCALS API services and the dedicated Hadoop cluster (operated by the IT-DB-SAS group) rely on Kerberos as their authentication mechanism. The session must be provided with the Kerberos principal and keytab location.
Using that information, a Kerberos token has to be created in the environment:
# Obtain Kerberos token in the environment
# You can also skip the kinit step and pass --principal <user>@CERN.CH --keytab <keytab-file>
# as options directly to pyspark or spark-shell
$ kinit -kt /path/to/keytab/my.keytab <user>@CERN.CH
Alternatively, the principal and keytab properties can be passed to the Spark context. This can be done by modifying the file nxcals-bundle/conf/spark-defaults.conf in your Python venv:
- for the NXCALS API services we need to add two additional options to 'spark.driver.extraJavaOptions':
-Dkerberos.principal=<user>@CERN.CH -Dkerberos.keytab=/path/to/keytab/my.keytab
- for YARN-specific Kerberos configuration one can use the properties below:
spark.yarn.keytab /path/to/keytab/my.keytab
spark.yarn.principal <user>@CERN.CH
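Putting both together, the relevant fragment of nxcals-bundle/conf/spark-defaults.conf could look like the sketch below (the user name and keytab path are placeholders; the extraJavaOptions value should be appended to any options already present):

```properties
spark.driver.extraJavaOptions -Dkerberos.principal=<user>@CERN.CH -Dkerberos.keytab=/path/to/keytab/my.keytab
spark.yarn.keytab /path/to/keytab/my.keytab
spark.yarn.principal <user>@CERN.CH
```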
The properties can also be passed as command-line options to the Spark session:
$ pyspark --principal <user>@CERN.CH --keytab /path/to/keytab/my.keytab
Example of PySpark session
# Activate your Python venv
$ source ./venv/bin/activate
# Now just run the pyspark executable and you are ready to go with NXCALS
(venv) $ pyspark
# Or executors=5 cores-per-executor=2 memory=3GB driver-memory=10G in YARN mode
(venv) $ pyspark --master yarn --num-executors 5 --executor-cores 2 --executor-memory 3G --conf spark.driver.memory=10G
# Or just get some help on available parameters
(venv) $ pyspark --help
# Once pyspark has started
# Import NXCALS Builders
>>> from cern.nxcals.api.extraction.data.builders import *
# Build a CMW specific Device/Property query
>>> query = DevicePropertyDataQuery.builder(spark).system("CMW").startTime("2018-04-02 00:00:00.000").endTime("2018-05-02 00:00:00.000").entity().parameter("CPS.TGM/FULL-TELEGRAM.STRC").build().select("USER","DEST")
# Or you can query by key-values, which is a more generic approach
>>> query = DataQuery.builder(spark).byEntities().system("CMW").startTime("2018-04-02 00:00:00.000").endTime("2018-05-02 00:00:00.000").entity().keyValue("device","CPS.TGM").keyValue("property", "FULL-TELEGRAM.STRC").build()
# Count the records in the dataframe
>>> query.count()
Count: 644142
# Select specific fields with filter predicates
>>> query.select("*").where("DEST like '%EAST%'").show()
# To exit pyspark type
>>> quit()
# Deactivate your Python venv
(venv) $ deactivate
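The query builders above accept start and end times as strings with millisecond precision ("YYYY-MM-DD HH:MM:SS.mmm"). A small helper (a sketch, not part of the NXCALS API) can generate such strings from Python datetime objects:

```python
from datetime import datetime, timedelta

def to_nxcals_time(dt: datetime) -> str:
    """Format a datetime into the 'YYYY-MM-DD HH:MM:SS.mmm' string
    accepted by the NXCALS query builders (millisecond precision)."""
    # %f yields microseconds (6 digits); strip the last 3 to keep milliseconds.
    return dt.strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]

# Build a one-month window ending on 2018-05-02, as in the examples above.
end = datetime(2018, 5, 2)
start = end - timedelta(days=30)
print(to_nxcals_time(start))  # 2018-04-02 00:00:00.000
print(to_nxcals_time(end))    # 2018-05-02 00:00:00.000
```

Such strings can then be passed directly to startTime() and endTime().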
Run a PySpark session customized by an external startup file:
There are use cases where a PySpark session needs to be run together with some interactive start-up commands.
Such commands could be part of an analysis that should run during start-up of the session. For this purpose, put the path to that Python file in the PYTHONSTARTUP
environment variable passed to pyspark:
(venv) $ PYTHONSTARTUP="/path/to/my/startup.py" pyspark
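As an illustration, a hypothetical startup.py could predefine a default query window for the interactive session (pyspark creates the spark session before PYTHONSTARTUP runs, so it could be used in this file as well):

```python
# startup.py - hypothetical PYTHONSTARTUP file for an interactive PySpark session.
# pyspark creates the 'spark' session object before this file is executed.
from datetime import datetime, timedelta

# Predefine a default one-day query window for use in the session.
END_TIME = datetime.now().replace(microsecond=0)
START_TIME = END_TIME - timedelta(days=1)
print(f"Default query window: {START_TIME} -> {END_TIME}")
```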
Standalone PySpark script
Instead of running PySpark applications in the interactive mode you can submit them through spark-submit
script to be run locally or in a cluster as described here.
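For example, a script could be submitted to YARN along these lines (the script name and resource settings are illustrative; see the spark-submit documentation for the full option list):

```shell
# Submit a PySpark script to the cluster instead of running it interactively
spark-submit --master yarn --num-executors 5 --executor-cores 2 \
    --executor-memory 3G my_script.py
```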
Standalone Python application
You can also run standalone Python applications in your Python venv.
Here is a sample Python application:
from nxcals import spark_session_builder
from cern.nxcals.api.extraction.data.builders import DevicePropertyDataQuery
# Possible master values (the location to run the application):
# local: Run Spark locally with one worker thread (that is, no parallelism).
# local[K]: Run Spark locally with K worker threads. (Ideally, set this to the number of cores on your host.)
# local[*]: Run Spark locally with as many worker threads as logical cores on your host.
# yarn: Run using a YARN cluster manager.
spark = spark_session_builder.get_or_create(app_name='spark-basic', master='yarn')
intensity = DevicePropertyDataQuery.builder(spark).system("CMW").startTime("2018-06-17 00:00:00.000").endTime("2018-06-20 00:00:00.000").entity().parameter("PR.BCT/HotspotIntensity").build()
# count the data points
intensity.count()
intensity.show()
You can simply run it in your Python venv:
$ source ./venv/bin/activate
(venv) $ python sample.py
Scala Spark shell
The NXCALS Spark bundle also allows running the Scala Spark shell. Like PySpark, it relies on Kerberos authentication, which can be provided in the same way.
A sample spark-shell session running Scala code is shown below:
$ ./bin/spark-shell
# Or executors=5 cores-per-executor=2 memory=3GB driver-memory=10G in YARN mode
$ ./bin/spark-shell --master yarn --num-executors 5 --executor-cores 2 --executor-memory 3G --conf spark.driver.memory=10G
# Or just get some help on available parameters
$ ./bin/spark-shell --help
# OPTIONALLY: Load some examples (from example.scala file, ls the dir to see)
# Once the scala shell is started please load the examples (it will run some queries)
#
# Execution time depends on the number of executors, memory, cores, etc.
# There will be 3 datasets loaded, data, tgmData & tgmFiltered ready to play with.
scala> :load example.scala
# Wait for the script to load, watch the data being retrieved and some operations done on it.
# Once you get the prompt scala> back you can write your commands.
# Some operations on the data (see example.scala file for more or read the Spark manual)
scala> data.printSchema
scala> tgmData.printSchema
scala> data.show
scala> tgmData.show
scala> tgmFiltered.count
scala> tgmFiltered.join(data, "cyclestamp").agg(sum('current')).show
# To exit spark-shell type
scala> :quit
You can look at some scala code examples for CMW.
SWAN Notebook
Close collaboration with our colleagues from EP-SFT (Enric Saavedra, Danilo Piparo) and IT-DB-SAS has led to the successful integration of SWAN's Jupyter notebooks with the NXCALS Hadoop cluster, allowing data stored on the NXCALS service to be accessed and analyzed using Spark.
- The SWAN platform is accessible at the following link: https://swan.cern.ch/
- More information about SWAN can be found on the dedicated service homepage: https://swan.web.cern.ch/
- A SWAN session can be created by following the instructions available here
- Example notebooks can be downloaded from CERNBox as described here
Important
If you are planning to use a service account instead of your personal computing account, you have to subscribe to the CERNBOX service first (https://resources.web.cern.ch/resources/) and log in to https://cernbox.cern.ch/ before using SWAN.
Java API
A Java examples project can be cloned from GitLab by following steps described here.
Jupyter
First install Jupyter, or just the Anaconda Python distribution from https://www.anaconda.com/download/#download.
Once done, set the PySpark driver to Jupyter and run ./bin/pyspark as usual:
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'
./bin/pyspark --master yarn --num-executors 10
This will open a browser with Jupyter notebook. Please wait a bit until Spark is initialized on the cluster.
Important
Make sure to have jupyter in your PATH. Verify using "which jupyter".
At this stage you should be able to open a Jupyter notebook in the browser; however, before executing any code you still have to add the path to the NXCALS packages from the NXCALS bundle.
Execute the following in the first cell of your notebook:
import os
import sys
sys.path.append(os.getcwd() + "/nxcals-python3-env/lib/python3.7/site-packages/")
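Since the bundle location and Python version may differ between installations, a slightly more defensive variant (a sketch; the path is the one assumed above and should be adjusted to your setup) checks that the directory exists and avoids duplicate entries on repeated cell execution:

```python
import os
import sys

# Path to the NXCALS bundle packages (adjust to your installation).
nxcals_pkgs = os.path.join(
    os.getcwd(), "nxcals-python3-env", "lib", "python3.7", "site-packages"
)

# Append only if the directory exists and is not already on the path.
if os.path.isdir(nxcals_pkgs) and nxcals_pkgs not in sys.path:
    sys.path.append(nxcals_pkgs)
```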