AnalysisAtNikhef

From ALICE Wiki
Revision as of 21:39, 25 September 2016

Running analyses at Nikhef

General information

This is an index of technical/computing information for the ALICE group.

If you want to launch your analysis on the Nikhef batch farm and you are connecting from outside the Nikhef domain, first connect to the login server:

  ssh <username>@login.nikhef.nl

Please note that you should never run anything on this machine (not even a browser), since it is the entry point to the Nikhef domain for everyone. For light tasks (e.g. browsing a web page or downloading a paper), connect to one of the desktops of our group:

  vesdre, romanche, hamme, blavet, mella, mulde, olona, sacco, luhe

If you need to run an analysis interactively that accesses data stored on the common disk, connect to one of the interactive nodes:

  stbc-i1, stbc-i2, stbc-i4
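The two-hop login (first login.nikhef.nl, then an interactive node) can be automated in your SSH client configuration. A minimal sketch, assuming OpenSSH; the host aliases "nikhef-login" and "stbc" are illustrative choices, not official names:

```
# ~/.ssh/config -- hop through the Nikhef login server to an interactive node.
# Replace <username> with your Nikhef account name.
Host nikhef-login
    HostName login.nikhef.nl
    User <username>

Host stbc
    HostName stbc-i1
    User <username>
    ProxyJump nikhef-login
```

With this in place, `ssh stbc` lands you on stbc-i1 via the login server without running anything on it. ProxyJump requires a reasonably recent OpenSSH client; older clients can achieve the same with `ProxyCommand ssh -W %h:%p nikhef-login`.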

Storage information

  • Each user is allocated limited space under the home directory. Use this space only for lightweight files, e.g. macros, PDF files, etc.
  • As a group we have 300 GB under /project/alice, which is used to install our common software. Please do not use this space to store large files, e.g. ROOT files. This directory is accessible from all machines, i.e. both the desktops (see the names above) and the interactive and batch nodes of Nikhef.
  • The current common storage at Nikhef is called GlusterFS and is accessible only from the interactive and batch nodes. As a group we have ~12 TB of disk space under /glusterfs/alice1. This is currently the place to store your files for analysis, production, etc.
  • We also requested additional space, which was recently configured. It is 300 TB in size and based on dCache, an efficient system for storing and retrieving large files. At the moment only 10 TB are available; they are being tested and validated under /dcache/alice. If you want access to it, please send a mail to Panos Christakoglou. Note that this storage is not visible to AliEn; it is reserved for local usage. It can, however, see the GRID file catalogue, which allows copying productions there (the typical use case of this storage for the group).

ALICE software and how to access it

The ALICE software is no longer installed locally at Nikhef. If you still require a locally installed version (e.g. for debugging or development), please send a mail to Panos Christakoglou, indicating the tag you need.

Accessing the ALICE software is now done via cvmfs, which is centrally installed on all machines at Nikhef. To set it up, first add the following line to your .bashrc (or the startup script of whichever shell you use):

  source /cvmfs/alice.cern.ch/etc/login.sh
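A slightly more defensive variant of that .bashrc line, a sketch for the case where you also log in to machines that do not have cvmfs mounted (the guard is our addition; the path is the one given above):

```
# Load the ALICE cvmfs environment only where cvmfs is actually mounted,
# so shells on machines without cvmfs do not print errors at startup.
if [ -r /cvmfs/alice.cern.ch/etc/login.sh ]; then
    source /cvmfs/alice.cern.ch/etc/login.sh
fi
```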

To list the available modules type:

  alienv q | grep AliPhysics
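The list can be long. Since the daily tags encode a date (vAN-YYYYMMDD-N), ordinary text tools can pick out the newest one; a sketch on sample module names, where the printf stands in for the `alienv q | grep AliPhysics` output (the sort pipeline is our suggestion, not part of the official tooling):

```shell
# Pick the newest AliPhysics daily tag from a module list.
# Fields split on '-': name, date (YYYYMMDD), daily build number.
printf '%s\n' \
  'VO_ALICE@AliPhysics::vAN-20160901-1' \
  'VO_ALICE@AliPhysics::vAN-20160905-1' \
  'VO_ALICE@AliPhysics::vAN-20160903-1' |
  sort -t- -k2,2n -k3,3n | tail -n 1
# → VO_ALICE@AliPhysics::vAN-20160905-1
```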

To load the ALICE environment on your Nikhef node, use:

  alienv enter VO_ALICE@AliPhysics::vAN-20160905-1

replacing the AliPhysics tag with the one you want to use.

Running analyses on the Stoomboot cluster

We have two local computer clusters:

  • Stoomboot at Nikhef (see the wiki page on ALICE software on Stoomboot)
  • Quark at Utrecht University. You need a UU computing (Solis) account for this one. Some basic instructions are here: http://www.staff.science.uu.nl/~leeuw179/bachelor_research/quark_cluster.html

If you are new to the group, subscribe to our mailing list: https://mailman.nikhef.nl/mailman/listinfo/alice-group
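For non-interactive work, jobs on Stoomboot are submitted through the batch system. A minimal job-script sketch, assuming a PBS/Torque-style scheduler and the cvmfs setup described above; the walltime, the AliPhysics tag, and the macro name runAnalysis.C are all illustrative:

```
#!/bin/bash
#PBS -l walltime=04:00:00

# Set up the ALICE environment from cvmfs inside the batch job.
source /cvmfs/alice.cern.ch/etc/login.sh
eval "$(alienv printenv VO_ALICE@AliPhysics::vAN-20160905-1)"

# Run from the directory the job was submitted from (hypothetical macro).
cd "$PBS_O_WORKDIR"
root -b -q runAnalysis.C
```

Assuming standard PBS tooling applies on Stoomboot, you would submit this with `qsub myjob.sh` from an interactive node and check its status with `qstat -u <username>`.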