CSC Ntuple production

Setting up Grid tools

This page describes how to set things up for ntuple production on the Grid. You need a Grid certificate [1] and some patience ;-)

    Two tips:
  • use voms-proxy-init instead of grid-proxy-init
  • use bash for shell scripts
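
For the first tip, a typical invocation looks like this (assuming the standard voms-proxy-init syntax; the VO name atlas is the one used with gridmgr further down this page):

voms-proxy-init -voms atlas   # create a VOMS proxy for the atlas VO
voms-proxy-info               # check that the proxy exists and how long it remains valid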

GridTools

Wouter wrote some very nice tools to submit/retrieve jobs from the Grid. They can be obtained from the Nikhef CVS:

cvs -d /project/atlas/cvs co GridTools

The GridTools package contains a few shell scripts:

dmgr
to manage datasets on the Grid
gpm
to set up packages on the SE for Grid jobs (to prevent job sizes exceeding the input sandbox limit)
gridmgr
main tool to manage Grid jobs (submit/retrieve/cancel/monitor)
gridinfo
a funny tool (lcg-infosites seems more useful)
jobmgr
to define Grid jobs (with the possibility to run them locally for testing)

The dmgr and gpm scripts may need to be adjusted to point at the right hosts:

LFC_HOST = lfc03.nikhef.nl
GRID_SE  = tbn18.nikhef.nl
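
With the scripts adjusted, a typical round trip looks roughly like this (a sketch using only the invocations shown later on this page; myjob.jtf is a placeholder name):

jobmgr run myjob.jtf:1                     # test job 1 locally first
gridmgr submit --vo atlas myjob.jtf:1-10   # then submit jobs 1-10 to the Grid
gridmgr retrieve --dir ./output -a         # and retrieve the finished output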

GridModules

GridModules are packages which will be installed in /grid/atlas/users/${USER}/gpm. These packages reside on the SE (tbn18.nikhef.nl) and can be used by jobs. This prevents submitting (too) large jobs; note that there is a limit on the input sandbox of ~20-50 MB.

GridModules are available from CVS:

cvs -d /project/atlas/cvs co GridModules

A few examples of modules are present (interesting study material).

make_package
With this tool, a directory is tarred and stored on the SE (using gpm from the GridTools). Note that the file ~/.gpmrc keeps track of which modules have been installed. When running jobs locally, it will use the locally available package instead of the one installed on the SE. Be careful not to include a trailing slash '/' when making the package!
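For example (DataManager is the package set up in the next item):
make_package DataManager    # correct: no trailing slash
make_package DataManager/   # wrong: trailing slash (see the warning above)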
DataManager
This package is used to copy/move/delete and define datasets on the SE. You will definitely need this package. But before doing a make_package DataManager, edit DataManager/run to set LFC_HOST and GRID_SE to the desired addresses.
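A sketch of that edit (values taken from the GridTools section above; the exact assignment syntax in DataManager/run may differ):
LFC_HOST=lfc03.nikhef.nl   # file catalogue host
GRID_SE=tbn18.nikhef.nl    # storage element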
AtlasRelease
To use the Athena software on the Grid, the desired release has to be set up. The module actually contains only one file, setup.sh, which is similar to the setup script you source when setting up Athena locally. (Part of this script in fact sets up Athena locally, since you can also run/test jobs locally.) A few changes have taken place since release 12; my 'AtlasRelease12.0.6/setup.sh':
#!/bin/sh
#
# Setup the environment for ATLAS release 12.0.6
#
# gossie@nikhef.nl
#
LOCAL_AREA=/data/atlas/offline/12.0.6

# --- Clear command line to avoid CMT confusion ---
set --

if [ "$VO_ATLAS_SW_DIR" != "" ] ; then

  # --- Follow the GRID approach ---
  echo "Setting up ATLAS release 12.0.6 from VO_ATLAS_SW_DIR=$VO_ATLAS_SW_DIR"
  . $VO_ATLAS_SW_DIR/software/12.0.6/setup.sh
  . ${SITEROOT}/AtlasOffline/12.0.6/AtlasOfflineRunTime/cmt/setup.sh
  CMTPATH="${PWD}:${CMTPATH}"

elif [ -d $LOCAL_AREA ] ; then

  # --- Follow the local approach ---
  echo "Setting up ATLAS release 12.0.6 from LOCAL_AREA=$LOCAL_AREA"
  . $LOCAL_AREA/setup.sh
  . ${SITEROOT}/AtlasOffline/12.0.6/AtlasOfflineRunTime/cmt/setup.sh
  CMTPATH="${PWD}:${CMTPATH}"

else

  # --- ERROR: Don't know where the release is! ---
  echo "ERROR setting up ATLAS release 12.0.6, cannot find release"
  echo "Release_12.0.6_Not_Found" > errorcode

fi
Note that release 12.0.7 is the final release of the 12 series (but it is not installed on all Grid machines yet).
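With setup.sh in place, the module is installed like any other (the directory name AtlasRelease12.0.6 matches the jtf example below):
make_package AtlasRelease12.0.6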
TopView module
Then it's time for an 'Analysis' package (OK, dumping AODs into ntuples is not really 'analysis', though eh... what's the difference?). For CSC ntuple production here at Nikhef, the TopViewAODtoNtuple-00-12-13-03 package can be used.
The steps involved to create the package (for reference):
  • check the latest version of EventView group area:
http://atlas-computing.web.cern.ch/atlas-computing/links/kitsDirectory/PAT/EventView/
  • copy the tar-file to a temporary directory
wget http://atlas-computing.web.cern.ch/atlas-computing/links/kitsDirectory/PAT/EventView/EventView-12.0.6.8.tar.gz
  • strip unnecessary files/directories
only InstallArea and PhysicsAnalysis are needed (note: this will be your 'testarea')
A little complication: if the latest TopView version is not in the package, compile the desired TopView libraries locally and copy them to the InstallArea that will be used in the Grid module.
  • put the needed files in the module (e.g. TopViewAODtoNtuple-00-12-13-03):
tar -cvzf EventView-12.0.6.8_nikhef.tar.gz EVTags-12.0.6.8/
cp EventView-12.0.6.8_nikhef.tar.gz ${USER}/GridModules/TopViewAODtoNtuple-00-12-13-03
  • check the scripts run and AODtoTVNtuple.py (adjust version numbers!) in the TopViewAODtoNtuple module
  • check LocalOverride_Nikhef_BASIC.py for other muon/tau/jet/MET collections
  • make_package TopViewAODtoNtuple-00-12-13-03
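
As a quick sanity check after the last step, one can inspect ~/.gpmrc, which (as noted under make_package above) keeps track of the installed modules:

grep TopViewAODtoNtuple ~/.gpmrc   # the new module should be listed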

A Grid job

Normally, a Grid job is defined via a 'jdl'-file (and managed with LCG commands). The GridTools provide a slightly easier approach using 'jtf'-files, which make it easier to define all the steps and necessary modules that need to be in a job. In the end, these jtf-files are converted to jdl-files and standard LCG commands are used, but the bookkeeping is simpler.

Defining a job

A typical AOD -> TVNTUPLE conversion job, e.g. named test_Nikhef_AODtoTVNtuple.jtf, might look like this:

--req AtlasRelease12.0.6,doetnie

# --- Set up Atlas release 12.0.6 ---
AtlasRelease12.0.6

# --- Copy AOD from Grid SE ---
DataManager copy dset:test.AOD.v12000601[$SEQ/10] InputFiles

# --- Convert AOD to ntuple with TopView ---
TopViewAODtoNtuple-00-12-13-03 fullsim

# --- Store output on Grid SE ---
DataManager copy OutputFiles dset:test.TVNTUPLE.v12000601

It executes the AtlasRelease12.0.6 package to set up Athena r12.0.6 and the DataManager package to copy the AOD from the SE to the worker node (WN). Then the AOD is converted with the TopViewAODtoNtuple package to a TVNtuple, and finally the TVNtuple is copied back to the SE.
The line with [$SEQ/10] divides the AOD dataset into 10 equally sized subsets. So if there are 50 AODs in the dataset, each job (1-10) converts 5 AODs. Be careful: the number of AODs in a dataset may change over time!
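
A minimal sketch of the arithmetic, assuming job $SEQ takes the $SEQ-th of the ten contiguous slices (the exact indexing convention is an assumption):

# hypothetical illustration of [$SEQ/10] for a 50-file dataset
NFILES=50; NJOBS=10; SEQ=3
PER=$((NFILES / NJOBS))                      # 5 AODs per job
FIRST=$(( (SEQ - 1) * PER + 1 ))             # first AOD of this slice
LAST=$(( SEQ * PER ))                        # last AOD of this slice
echo "job $SEQ converts AODs $FIRST-$LAST"   # -> job 3 converts AODs 11-15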

Playing around with a job

# --- Submit jobs 1-10 to the Grid under the atlas VO ---
gridmgr submit --vo atlas test_Nikhef_AODtoTVNtuple.jtf:1-10

# --- Retrieve the output into ./output_test (-a presumably retrieves all finished jobs) ---
gridmgr retrieve --dir ./output_test -a

# --- Run job 3 locally, without submitting (handy for testing) ---
jobmgr run test_Nikhef_AODtoTVNtuple.jtf:3

# --- List requirements (presumably the tags usable in a --req line, cf. above) ---
jobmgr listreq

Ganja

Now that everything is set up, it's time to dump the AODs!
