
Using SWAN

One of the available methods of accessing NXCALS data is through a SWAN notebook session.

How to create a SWAN session


Please note that currently a user can open a maximum of two Spark connections (to the same cluster) per SWAN session.

In other words, you can open a second notebook in the same SWAN session with its own SparkContext, but creating a third connection within the same session will not work. This restriction could be lifted at a later stage if necessary.

  1. Log in with your CERN NICE credentials at:
  2. Provide the following information:

    • Software stack: NXCals Python3 (to ensure the latest NXCALS libraries)
    • Platform: use the proposed value
    • Number of cores: select as required
    • Memory: select as required
    • Spark cluster: BE NXCALS (NXCals)


  3. From the list of Projects, select a notebook to work with or create a new one. Note that the platform also lets you upload files to / download files from that directory and share notebooks with others.


  4. Authenticate using your NICE credentials by clicking the STAR button in the navigation bar (this action establishes the connection to the Spark cluster). Ensure that the "Include NXCALS options" check-box is enabled. With the NXCALS options in place, the "Selected configuration" region immediately displays all the configuration that will be applied when the Spark session is created.



  5. Execute the content of the cells:


    from nxcals.api.extraction.data.builders import DataQuery
    df = DataQuery.builder(spark).entities().system('CMW') \
        .keyValuesEq({'device': 'LHC.LUMISERVER', 'property': 'CrossingAngleIP1'}) \
        .timeWindow('2022-04-22 00:00:00.000', '2022-04-23 00:00:00.000') \
        .build()
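The `timeWindow` bounds are plain timestamp strings. If you prefer not to hard-code them, a minimal standard-library sketch can build a one-day window in the same format (the `day_window` helper below is illustrative, not part of the NXCALS API):

```python
from datetime import datetime, timedelta

def day_window(start_str):
    """Return (start, end) timestamp strings covering 24 hours,
    formatted like the timeWindow arguments above."""
    fmt = '%Y-%m-%d %H:%M:%S.%f'
    start = datetime.strptime(start_str, fmt)
    end = start + timedelta(days=1)
    # The examples use millisecond precision, so trim microseconds to 3 digits
    return start.strftime(fmt)[:-3], end.strftime(fmt)[:-3]

start, end = day_window('2022-04-22 00:00:00.000')
```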

Example notebooks

There are a number of notebooks related to the NXCALS project with some basic examples:

  • NXCALS-Demo (from presentation given during one of BE-CO technical meetings)
  • NXCALS-Example (data retrieval using NXCALS builders, chart creation using matplotlib)
  • Pandas-example (simple data manipulations using Pandas instead of Spark DataFrames)
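The kind of manipulation covered by the Pandas-example notebook can be sketched as follows. The data here is synthetic, standing in for a frame obtained from a Spark DataFrame (e.g. via `df.toPandas()`); column names are illustrative only:

```python
import pandas as pd

# Synthetic stand-in for data retrieved from NXCALS
pdf = pd.DataFrame({
    'acqStamp': pd.to_datetime(['2022-04-22 00:00', '2022-04-22 06:00',
                                '2022-04-22 12:00', '2022-04-22 18:00']),
    'value': [160.0, 158.5, 157.0, 159.2],
})

# Typical simple manipulations: sort by timestamp, filter, and aggregate
pdf = pdf.sort_values('acqStamp')
below_159 = pdf[pdf['value'] < 159.0]
mean_value = pdf['value'].mean()
```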

They can be downloaded directly from CERNBox and then uploaded to your SWAN projects.