Using GANGA with AMAAthena
Introduction
This document gives step-by-step instructions for running AMAAthena within GANGA on a NIKHEF desktop (e.g. ribble). AMAAthena is an Athena package providing ... developed at NIKHEF. GANGA is an official ATLAS grid utility for distributed data analysis.
The examples below assume that:
- Users have the following Athena job option files in the run directory of the AMAAthena package:
  - AMAAthena_jobOptions.py
  - Trigger_jobOptions.py
- Users have the following AMA configuration files in the run directory:
  - exampleaod.conf
  - reader.conf
Preparation
- Follow the CMT instructions to set up your CMTHOME directory
- Check out the AMAAthena package from CVS
- Make sure you start GANGA in a clean environment, without any Athena or CMT setup
Starting GANGA
Type the following commands from within the directory PhysicsAnalysis/AnalysisCommon/AMA/AMAAthena/cmt:
% source /project/atlas/nikhef/dq2/dq2_setup.sh.NIKHEF
% export DPNS_HOST=tbn18.nikhef.nl
% export LFC_HOST=lfc-atlas.grid.sara.nl
% source /project/atlas/nikhef/ganga/etc/setup.[c]sh
% ganga --config-path=/project/atlas/nikhef/ganga/config/Atlas.ini.nikhef
GANGA magic functions for cmtsetup
Inside GANGA, the complex CMT setup can be handled with two magic functions.
The following example shows how to set up the CMT environment for Athena 14.2.0 in 32-bit mode.
In [1]: config.Athena.CMTHOME = '/your/cmthome'
In [2]: cmtsetup 14.2.0,32
In [3]: setup
Running AMAAthena on Stoomboot
Using StagerDataset
When using the StagerDataset, the AMAAthena job uses the Athena FileStager to copy dataset files from grid storage.
1. Create a new job:

   In [1]: j = Job()

2. Set up the application:

   In [2]: j.application = AMAAthena()
   In [3]: j.application.option_files += [ File('../run/AMAAthena_jobOptions.py'), File('../run/Trigger_jobOptions.py') ]
   In [4]: j.application.driver_config.config_file = File('../run/exampleaod.conf')
   In [5]: j.application.driver_config.include_file += [ File('../run/reader.conf') ]
   In [6]: j.application.max_events = '1000'
   In [7]: j.application.prepare()

3. Set up the inputdata:

   In [8]: j.inputdata = StagerDataset()
   In [9]: j.inputdata.dataset += [ 'fdr08_run2*physics_Muon*AOD*o3*' ]

4. Set up the job splitter (optional):

   In [10]: j.splitter = StagerJobSplitter()
   In [11]: j.splitter.numfiles = 2

5. Set up the backend:

   In [12]: j.backend = PBS()

6. Submit the job:

   In [13]: j.submit()
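After submission, the job can be followed from the same GANGA session. The lines below are a minimal sketch using a few standard GANGA job attributes and commands; the exact status values and output location depend on your GANGA configuration.

   In [14]: jobs          # overview table of all jobs and their current statuses
   In [15]: j.status      # status of this job, e.g. 'submitted', 'running' or 'completed'
   In [16]: j.outputdir   # local directory where the job output sandbox is returned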
Using DQ2Dataset
When using the DQ2Dataset, GANGA handles the dataset file access itself, outside of Athena.
1. Create a new job:

   In [1]: j = Job()

2. Set up the application:

   In [2]: j.application = AMAAthena()
   In [3]: j.application.option_files += [ File('../run/AMAAthena_jobOptions.py'), File('../run/Trigger_jobOptions.py') ]
   In [4]: j.application.driver_config.config_file = File('../run/exampleaod.conf')
   In [5]: j.application.driver_config.include_file += [ File('../run/reader.conf') ]
   In [6]: j.application.max_events = '1000'
   In [7]: j.application.prepare()

3. Set up the inputdata:

   In [8]: j.inputdata = DQ2Dataset()
   In [9]: j.inputdata.dataset += [ 'fdr08_run2*physics_Muon*AOD*o3*' ]
   In [10]: j.inputdata.type = 'DQ2_DOWNLOAD'

4. Set up the job splitter (optional):

   In [11]: j.splitter = DQ2JobSplitter()
   In [12]: j.splitter.numfiles = 2

5. Set up the backend:

   In [13]: j.backend = PBS()

6. Submit the job:

   In [14]: j.submit()
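Since the DQ2JobSplitter (like the StagerJobSplitter above) splits the job into subjobs, each subjob can be inspected individually after submission. A minimal sketch, assuming the standard GANGA subjob interface:

   In [15]: len(j.subjobs)            # number of subjobs created by the splitter
   In [16]: j.subjobs[0].status       # status of a single subjob
   In [17]: j.subjobs[0].resubmit()   # resubmit an individual subjob, e.g. after a failure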
Running AMAAthena on LCG
Using StagerDataset
Using DQ2Dataset
When the job is completed
Work in progress
- supporting StagerDataset for jobs on the grid (LCG/NG)