NL Cloud Monitor Instructions

Introduction

This page gives step-by-step instructions for the shifters of the ATLAS NL-cloud regional operation to check several key monitoring pages used by ATLAS Distributed Computing (ADC). These pages are also monitored by the official ADC shifters (e.g. ADCoS, DAST).

The general architecture of the ADC operation is shown below. The shifters we are concerned with here are part of the "regional operation team". The contribution will be credited by OTSMU.

General architecture of the ADC operation

Shifter's duty

The shifter-on-duty needs to follow the instructions below to:

  1. check the different monitoring pages regularly (3-4 times per day is expected)
  2. notify the NL cloud squad team accordingly via adc-nl-cloud-support@nikhef.nl.


Things to monitor

ADCoS eLog

The ADCoS eLog is mainly used by ADC experts and ADCoS shifters to log the actions taken on a site concerning site issues, for example removing a site from or adding it back into the ATLAS production system. The eLog entries related to the NL cloud can be found here.

The shifter has to notify the squad team if an issue has not been followed up for a long while (~24 hours).
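As a rough illustration of this 24-hour rule, the check amounts to comparing the time of the last update of an entry with the current time. The sketch below uses made-up entries and field names; it is not the real eLog interface.

  from datetime import datetime, timedelta

  # Hypothetical eLog entries; in practice the eLog is checked through its web interface.
  entries = [
      {"id": 101, "subject": "SARA-MATRIX transfer errors", "last_update": datetime(2011, 5, 10, 8, 0)},
      {"id": 102, "subject": "NIKHEF-ELPROD pilot failures", "last_update": datetime(2011, 5, 11, 9, 30)},
  ]

  now = datetime(2011, 5, 11, 12, 0)     # time of the shift check
  stale_after = timedelta(hours=24)      # ~24 hours without follow-up

  for entry in entries:
      if now - entry["last_update"] > stale_after:
          print("Notify adc-nl-cloud-support@nikhef.nl about eLog entry", entry["id"], "-", entry["subject"])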

DDM Dashboard

The DDM Dashboard is used for monitoring the data transfer activities between sites.

The main monitoring page is explained below.

DDM Dashboard Explanation

There are a few things to note on this page:

  1. the summary indicates the data transfers "TO" a particular cloud or site. For example, transfers from RAL to SARA are categorized under "SARA", while transfers from SARA to RAL are categorized under "RAL" (see the sketch below this list).
  2. the cloud is labelled with its Tier-1 name; for example, "SARA" represents all transfers "TO" the NL cloud.
  3. it is handy to remember that the "yellow" bar indicates transfers to the NL cloud.
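The destination-based bookkeeping can be illustrated with a small sketch. The transfer records and the site-to-cloud mapping below are invented for the example; the dashboard applies the same grouping rule internally.

  # Hypothetical mapping of sites to clouds (a cloud is labelled by its Tier-1 name).
  SITE_TO_CLOUD = {"SARA": "SARA (NL)", "NIKHEF": "SARA (NL)", "RAL": "RAL (UK)"}

  # Hypothetical transfer records: (source site, destination site, number of files).
  transfers = [
      ("RAL", "SARA", 120),    # counted under "SARA", i.e. the NL cloud (destination = SARA)
      ("SARA", "RAL", 80),     # counted under "RAL" (destination = RAL)
      ("NIKHEF", "SARA", 40),
  ]

  summary = {}
  for source, destination, nfiles in transfers:
      cloud = SITE_TO_CLOUD[destination]           # group by the *destination* cloud
      summary[cloud] = summary.get(cloud, 0) + nfiles

  print(summary)    # {'SARA (NL)': 160, 'RAL (UK)': 80}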

To check this page, here are a few simple steps to follow:

  1. look at the bottom-right plot (total transfer errors). If the yellow bar persists every hour with a significant number of errors, go to the summary table below.
  2. to check the failed transfers to the NL cloud, click on the "SARA" entry in the summary table. The table will be extended to show the detailed transfers to the sites within the NL cloud. From there you can see which site is in trouble.
  3. once you have identified the destination site of the problematic transfers, click on the "+" sign in front of the site; the table will be extended again to show the "source site" of the transfers. By clicking on the number of transfer errors shown in the table (the 4th column from the end), the error message will be presented. A graphic instruction of those steps is shown below, followed by a small sketch of the check in step 1.
Steps to trace down to the transfer error messages
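As a rough sketch of the check in step 1, the idea is to flag destination sites for which the error count stays high in every recent hour. The site names, hourly numbers and threshold below are made up; the real values are read off the dashboard plots and summary table.

  # Hypothetical hourly error counts for transfers into the NL cloud, keyed by destination site.
  hourly_errors = {
      "SARA-MATRIX":   [0, 2, 1, 0],
      "NIKHEF-ELPROD": [35, 42, 50, 61],
  }

  THRESHOLD = 20    # what counts as a "significant" number of errors is a judgement call

  for site, errors in hourly_errors.items():
      if all(n > THRESHOLD for n in errors):       # the yellow bar persists every hour
          print("Drill down on", site, "- expand the row, check the source sites and error messages")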

The shifter has to report the problem to the NL squad team when the number of errors is high.

The shifter can skip reporting a problem in any of the following cases (a sketch of this decision is given after the list):

  1. the error message indicates that it is a "SOURCE" error (this is visible in the error message itself).
  2. the site is in downtime. The downtime schedule can be found here: http://lxvm0350.cern.ch:12409/agis/calendar/
  3. the same error has already been reported earlier during your shift.
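A minimal sketch of this decision, with made-up inputs (the downtime information really comes from the AGIS calendar linked above and the error text from the dashboard):

  # Hypothetical helper reflecting the three "can ignore" cases above.
  def needs_report(error_message, destination_site, sites_in_downtime, already_reported):
      if "SOURCE" in error_message:               # source-side problem
          return False
      if destination_site in sites_in_downtime:   # destination site is in a scheduled downtime
          return False
      if error_message in already_reported:       # same error already reported during this shift
          return False
      return True

  print(needs_report("DESTINATION error: no space left", "SARA-MATRIX", set(), set()))    # True
  print(needs_report("SOURCE error: connection timed out", "SARA-MATRIX", set(), set()))  # False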

Panda Monitor (Production)

The Panda Monitor for ATLAS production is used for monitoring Monte Carlo simulation and data reprocessing jobs on the grid.

The graphic explanation of the main page is given below.

Explanation of the Panda main page

Here are a few simple steps to follow when checking this page:

  1. first check the number of active tasks in the NL cloud against the number of running jobs in the NL cloud. If the number of active tasks for the NL cloud is non-zero but there are no running jobs, something is wrong and the shifter should notify the NL squad team to have a look.
  2. then look at the job statistics table below. The statistics are summarized by cloud. The first check is on the last column of the NL row, which gives the overall job failure rate over the past 12 hours (a rough calculation is sketched after this list). If the rate is too high (e.g. > 30%), go through the following instructions to pick one example failed job.
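The failure-rate check in step 2 is simply the ratio of failed jobs to all finished plus failed jobs over the past 12 hours. The numbers below are made up; the real ones are read from the Panda job statistics table.

  # Hypothetical job counts for the NL cloud over the past 12 hours.
  finished = 700
  failed = 350

  failure_rate = failed / float(finished + failed)
  print("NL failure rate: {:.0%}".format(failure_rate))    # 33%

  if failure_rate > 0.30:    # the ~30% threshold above is indicative, not a hard limit
      print("Too high: follow the instructions below to pick an example failed job")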

How to find failed jobs

Here are the instructions to find failed jobs:

  1. click on the number of failed jobs per site in the summary table. This will take you to the list of failed jobs with error details.
  2. try to categorize the failed jobs by their error details and pick one example job per failure category (a small sketch of this grouping is given below the figure).
  3. report to the NL squad team the failure categories and a link to an example job per category.

The following picture shows a graphic illustration of those steps.

Finding example job and job error in Panda
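A minimal sketch of the grouping in step 2, using made-up job IDs and error labels; in practice the error details come from the list of failed jobs reached through the summary table.

  from collections import defaultdict

  # Hypothetical failed jobs with an error label taken from their error details.
  failed_jobs = [
      {"pandaid": 1234567890, "error": "Failed to get input file"},
      {"pandaid": 1234567891, "error": "Failed to get input file"},
      {"pandaid": 1234567892, "error": "No local space left on worker node"},
  ]

  by_category = defaultdict(list)
  for job in failed_jobs:
      by_category[job["error"]].append(job["pandaid"])

  # Report one example job per failure category to the squad team.
  for category, ids in by_category.items():
      print(category, "-> example job:", ids[0], "({} jobs in this category)".format(len(ids)))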

Panda Monitor (Analysis)

The instructions for checking the Panda analysis job monitor are similar to those given in #Panda Monitor (Production).

GangaRobot

GangaRobot is a site functional test that runs analysis jobs. Sites that fail one of the regular tests in the past 12 hours are blacklisted. User analysis jobs submitted through the gLite Workload Management System (WMS) from Ganga are set up to avoid being assigned to those problematic sites.
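A minimal sketch of this idea, with hypothetical site names and blacklist content; the actual blacklist is the one linked below.

  # Sites that failed a GangaRobot test within the past 12 hours are excluded from brokering.
  candidate_sites = ["ANALY_SARA", "ANALY_NIKHEF", "ANALY_RAL"]
  blacklisted = {"ANALY_NIKHEF"}

  usable = [site for site in candidate_sites if site not in blacklisted]
  print("User analysis jobs can be brokered to:", usable)

  # An NL-cloud site appearing in the blacklist means the squad team should be notified.
  nl_sites = {"ANALY_SARA", "ANALY_NIKHEF"}
  if blacklisted & nl_sites:
      print("Notify the NL squad team about:", sorted(blacklisted & nl_sites))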

The currently blacklisted sites can be found here.

The shifter has to notify the NL squad team when any of the NL cloud sites shows up in the list.

Site blacklists to check

Shifters' calendar

view the scheduled shifters

Quick links to monitoring pages


Quick links for further investigation/operation